Capabilities-Based Assessment
User's Guide
Version 2
Foreword
The Joint Capabilities Integration and Development System (JCIDS) was established in 2003 to
overcome several shortcomings in the existing requirements process. While much has been published
on administering JCIDS, there was a lack of written advice on how to assess the DOD's needs and
recommend solutions in terms of operational capabilities.
The initial version of this paper (January 2006) was the first document to offer practical advice on
how to conduct such an assessment. Since that version was released, the instructions and manuals
governing JCIDS have been revised, and the DOD has done many more Capabilities-Based
Assessments (CBAs). This update addresses both the regulatory changes and what we have learned
about doing these analyses.
This document does three things: first, it advises an action officer on how to organize and execute a
CBA; second, it connects the CBA process to both the overarching strategic guidance and the proven
analytical methods available in the DOD; and third, it is readable. As a result, this paper discusses
bureaucratic realities that would not be addressed in an instruction, points out the occasional area
where strategic guidance is immature, inconsistent, or conflicting, and uses an informal style aimed at
engaging the reader.
This guide will provide you with a great deal of advice on how to assemble an assessment that meets
the aims of JCIDS. While the guide is not directive or prescriptive, it captures important lessons from
the CBAs conducted to date, and discusses the techniques and practices that have worked. Doing a
good CBA is difficult, and this guide will not change that. But, a CBA should not be mystifying, and
this paper is aimed at demystifying the inputs, best practices, and desired outcomes of such an
assessment.
An electronic version of this guide is available on the unclassified J-7 Joint Experimentation, Concept
Development, and Training web site (http://www.dtic.mil/futurejointwarfare/), as well as the Joint
Requirements Oversight Council's classified Knowledge Management and Development System
(https://jrockmds1.js.smil.mil/guestjrcz/gbase.guesthome).
Table of Contents
OPERATIONAL DEPICTION........................................................................................................... 37
CHOOSING AN ANALYTICAL APPROACH..................................................................................... 40
COLLECTING AND INSPECTING PERFORMANCE DATA ................................................................ 44
EXECUTING THE ANALYTICAL APPROACH ................................................................................. 45
EXTRACTING AND REPORTING NEEDS ........................................................................................ 45
VISION AND REALITY IN STATING NEEDS................................................................................... 48
THE OVERALL FNA PROCESS ..................................................................................................... 48
REFERENCES.............................................................................................................................. 73
BACKGROUND ......................................................................................................................... 77
FORMING THE CBA TEAM ...................................................................................................... 78
INITIAL PLANNING AND SCHEDULING .................................................................................... 79
TEAM EVOLUTION ................................................................................................................... 80
METHODOLOGY AND EXECUTION ........................................................................................... 81
OBSERVATIONS ....................................................................................................................... 86
REFERENCES............................................................................................................................ 86
Table of Figures
FIGURE 1. MEMO FROM THE SECRETARY OF DEFENSE THAT BEGAN JCIDS................................................ 5
FIGURE 2. JCIDS ANALYSIS PROCESS. .......................................................................................................... 6
FIGURE 3. SIMPLIFIED DIAGRAM OF MAJOR CBA INPUTS, ANALYSES, AND OUTPUTS.................................. 7
FIGURE 4. DOCUMENTS COMPRISING THE JOINT OPERATIONS CONCEPTS. ................................................ 10
FIGURE 5. RELATIONSHIPS OF KEY STRATEGIC DOCUMENTS. ................................................................... 12
FIGURE 6. NOTIONAL ORGANIZATION MATRIX FOR A CBA........................................................................ 16
FIGURE 7. TASK RELATIONSHIPS AND OVERLAP FOR A CBA...................................................................... 20
FIGURE 8. NOTIONAL VALUE FUNCTION FOR EXPECTED PERSONNEL LOSSES. ........................................... 35
FIGURE 9. MAJOR FAA TASKS AND FLOW. ................................................................................................. 36
FIGURE 10. TASK FLOW FOR A RENDITION OPERATION FROM THE GLOBAL STRIKE RAID SCENARIO CBA. ............ 38
FIGURE 11. DERIVATION OF THE COMMON JOINT ENGAGEMENT SEQUENCE FOR THE IAMD CBA. ......... 39
FIGURE 12. TRANSITION DIAGRAM FOR LEADERSHIP TARGETS FROM THE GLOBAL STRIKE RAID SCENARIO CBA. ............ 40
FIGURE 13. CLASSIFICATION OF MODELS BY WARFIGHTING SCOPE. .......................................................... 41
FIGURE 14. A CLASSIFICATION SCHEME FOR ANALYSIS APPROACHES. ...................................................... 41
FIGURE 15. OVERALL FNA TASK FLOW...................................................................................................... 49
FIGURE 16. OVERALL FSA TASK FLOW. ..................................................................................................... 60
FIGURE 17. EXAMPLE QUICK TURN CBA FAA TASK FLOW....................................................................... 67
FIGURE 18. EXAMPLE QUICK TURN CBA FNA TASK FLOW....................................................................... 68
FIGURE 19. EXAMPLE QUICK TURN CBA FSA TASK FLOW. ...................................................................... 69
TABLE A.1. CANDIDATE BIOMETRICS USE CASES, WITH CATEGORIES AND SUBCATEGORIES. ................... 82
TABLE A.2. THE PAIRWISE COMPARISON OF THE REVISED SET OF CAPABILITY GAPS. ............................... 83
TABLE A.3. THE TOP FIVE CAPABILITY GAPS FOR THE SHARE CATEGORY. ................................................ 85
TO: Gen. Pace
CC: Paul Wolfowitz, Gen. Myers, Steve Cambone
FROM: Donald Rumsfeld
SUBJECT: Requirements System
As Chairman of the JROC, please think through what we all need to do, individually or
collectively, to get the requirements system fixed.
It is pretty clear it is broken, and it is so powerful and inexorable that it invariably
continues to require things that ought not to be required, and does not require things that
need to be required.
Please screw your head into that, and let's have four or five of us meet and talk about it.
Thanks.
Predictably, a considerable amount of activity followed (led by the decision to banish the word
"requirement" from the new process). This effort resulted in three principles that form the
foundation of JCIDS:
Deriving needs from a joint perspective, from a new set of joint concepts. The JCIDS
architects recognized that a new set of documents would be necessary to link strategic
ends to warfighting means. Furthermore, these documents would have to go beyond
doctrine, which is a statement of beliefs about the best way to do things with existing resources. The
joint concepts would have to challenge existing approaches and provide impetus for
improvement. Also, these documents would broaden the strategic view and force the
DOD to consider the needs of a variety of military problems, not just one or two
canonical conflicts.
Having a single general or flag officer oversee each DOD functional portfolio. One
problem with the existing requirements process was that no one organization had
responsibility for knowing what DOD was doing in, say, command-and-control systems.
As a result, senior DOD decision makers became involved only after an unacceptably
small set of options were defined. In JCIDS, each Functional Capability Board (FCB) is
directed by a general or flag officer who has that responsibility.
By the summer of 2003, JCIDS was up and operating. The FCBs began functioning, and the
production of joint concept documents began.
We do not claim that this transition has been straightforward or painless. CJCSI 3170.01, the
governing instruction for JCIDS, has been revised six times in its first three years. Also, debate
continues on what exactly a capabilities-based approach is, what task structures should be used,
the role of future planning scenarios and current operations plans, and the exact relationship
between JCIDS and the formal DOD acquisition system.
One early principle that has not survived is the idea of mandating integrated architectures as the
sole basis for capabilities assessments. Early JCIDS work asserted that the emergent DOD
Architecture Framework (DODAF) [DODAF Working Group, 2004] should be the basis for
functional assessments, interoperability assessments, and assessment of mission areas. The
DODAF, which provides a variety of systems engineering tools, was originally oriented
towards command, control, and communication interoperability. It was later expanded to
portray a variety of military functions, and JCIDS originally featured architectures as a central
mechanism.
Architectures are useful (and probably essential) once you have decided what to do, as they
provide a framework to help determine how to do it. JCIDS capability assessments, however,
tend to be concerned more with what to do, and a DOD study concluded in July 2004 that
architecture development was more appropriate after a JCIDS assessment was complete [J-8,
2004]. It is true that architecture production is still a fundamental principle of JCIDS, and many
key JCIDS documents must contain certain DODAF products. Nonetheless, producing
architectures is not a requirement for a CBA.
More recently, JCIDS implemented a lexicon for capabilities. This lexicon, called the Joint
Capabilities Areas (JCAs) [J-8, 2005], divides joint operations into a hierarchy that is not tied
to particular force elements or platforms. JCAs have been given high-level endorsement
[Secretary of Defense, 2005] and are becoming common in JCIDS analyses.
JCIDS is both ambitious and evolving. Consequently, executing most JCIDS processes requires
flexibility and creativity, because the DOD must continue to change to fully implement a
system based on the principles listed above. Regardless, it is important for you to understand
the aims of JCIDS. For further information on the motivations behind JCIDS, an excellent
source is the Joint Defense Capabilities Study Final Report [Aldridge et al., 2003].
Figure 2. JCIDS analysis process.
Figure 3. Simplified diagram of major CBA inputs, analyses, and outputs.
The FAA synthesizes existing guidance to specify the military problems to be studied. The
FNA then examines that problem, assesses how well the DOD can address the problem given
its current program, and recommends needs the DOD should address. The FSA takes this
assessment as input, and generates recommendations for solutions to the needs.
Of course, these simplified inputs and outputs decompose into much more complicated sets of
products, and the analyses themselves require much more examination. The point is, however,
that a JCIDS CBA is not really different from any other analysis. It must specify the issues,
estimate our current and projected abilities, and recommend actions.
Note that your CBA may not include an FSA. The current trend in JCIDS for jointly-initiated
assessments is to do an FAA and an FNA, and then produce a Joint Capabilities Document
(JCD), which is sent to the JROC. If the JROC opts to act on the needs identified in the
assessment, they will assign a sponsor (typically a Service) to do one or more FSAs.
This also means that you may do a CBA which consists of nothing but an FSA. In these cases,
you will have to rely on someone else's FAA and FNA, to include repairing any defects and
reacting to subsequent changes in guidance.
documents, but is a synthesis of what has been directed to date, and also reflects our experience
with DOD mission area assessments. The types of CBAs in this taxonomy are:
The reason we suggest this taxonomy is that the six different types have different implications
for what the CBA must emphasize. For example, a CBA based on an actual operational failure
will likely spend little (or no) time in the FAA, as the "what" has already been demonstrated.
Conversely, a CBA based on a perceived need, such as a study result, will still require
considerable work in the FAA. The fact that the needs are forecast, and not demonstrated,
indicates that there is still some question about the exact definition of the problem, its scope, or
whether the stated problem really is a problem.
CBAs aimed at unified examinations of mission areas support a primary objective of JCIDS. If
the mission area is not wholly within the province of a particular community (particularly a
Service), then it is likely that either multiple communities are addressing the problems without
much coordination, or no one is addressing it.
CBAs may also examine the utility of a proposed concept or solution. While this seems
contrary to the fundamental principle of having needs come from top-down concepts, the fact is
that good ideas can come from below, and may have much broader application than the
originators thought. Seabasing, which potentially addresses a wide range of military problems,
is the best current example of this type of CBA.
A CBA may be concerned with a broad look at a functional area. Again, this seems contrary;
for example, the Joint Chiefs of Staff (JCS) Tank has commissioned a CBA on joint
distribution, but JCIDS already has a Focused Logistics FCB whose entire mission is assessing
joint logistics. The answer is that the CBA should take a crosscutting look at the function, to
include assessing its effects on a variety of military problems. FAA scoping is very important in
this type of CBA, because attempting to examine the impacts of one functional area on
everything else is unmanageable.
Finally, this paper also discusses something called a Quick Turn CBA (Section 9). The
functional taxonomy of CBAs we list above still applies to these types of CBAs. However, we
discuss them separately because they normally must be executed in 30 to 60 days, and the
tremendous time compression requires modifying the approaches we recommend for less
frantic efforts.
are written by FCBs, producing joint concepts is not a JCIDS function; it is a separate process,
managed by JCS/J-7 and US Joint Forces Command/J-9.
Consequently, this paper will not describe joint concepts and their production in detail.
Nonetheless, you will have to be familiar with the applicable joint concepts, particularly one
called the Joint Integrating Concept (JIC). To date, all JROC-directed CBAs have been
accompanied by a JIC, which was tasked at the same time as the CBA (in the case of the
forcible entry example in Section 1, the document at that time was called a JOC). So the JIC
has a fundamental relationship to a CBA.
But what exactly is a JIC?
Recall that doctrine is a statement of beliefs about the best way to do something with the
resources we currently have. Joint concepts, however, are ideas about how something might be
done with resources we may not have yet. The Chairman of the Joint Chiefs of Staff (CJCS)
issued a series of Joint Vision documents through the 1990s as a means to drive progress in
the DOD; joint concepts documents have now assumed that role.
So, a JIC is a statement of how something might be done; in particular, it states how we would
like to do that thing in the future. Furthermore, the JIC is the lowest level of a family of concept
documents collectively called the Joint Operations Concepts. These are shown in Figure 4.
So, the integration the JIC performs is to use a set of general operational and functional
concepts to produce a description of how some specific operation or function might be done in
the future.
This may lead you to believe that the JIC that comes with your CBA will contain complete
guidance, and your job will be reduced to executing the quantitative assessment. After all,
CJCSI 3010.02B, Joint Operations Concepts, says:
JICs are narrowly scoped to identify, describe and apply specific capabilities,
decomposing them into the fundamental tasks, conditions, and standards required to
conduct a CBA.... Additionally, a JIC contains an illustrative vignette to facilitate
understanding of the concept. [2005, p. A-3]
Figure 4. Documents comprising the Joint Operations Concepts.
Unfortunately, this has not been the case. The first set of JICs (Global Strike, Joint Logistics
Distribution, Joint Command and Control, Seabasing, Integrated Air and Missile Defense, Joint
Undersea Superiority, and Joint Forcible Entry Operations) range from immensely detailed lists
of necessary tasks to Clausewitzian discussions of military operations, and nearly everything in
between.
We are not disparaging the authors of the current set of JICs. On the contrary, it is
extraordinarily difficult to write something that induces progress without making the document
either fanciful or vacuous. Formal joint concept development was established at the same time
as JCIDS, and has been subject to the same growing pains.
The most honest advice we can give you is that first, you should participate in (or at least
follow) the development of the JIC, and second, you must respect what the JIC says in the
execution of your assessment. But, the JIC will not be a statement of work for your CBA. The
concept development staffing process will ensure that the JIC contains at least the elements
cited above, but you will have to sharpen and augment the JIC to conduct your analysis.
You may find yourself doing a CBA that does not have a JIC. In this case, you will have to
provide what the JIC provides, particularly the statement of the military problem and the
specific operation or function being considered. Since a JIC does not exist, you will likely have
to come up with justification from some strategic guidance document (see Section 2.3) that
describes the operation or function and the need to examine it. You can fill in the other
elements that a JIC would contain in the course of doing your FAA.
contingency plans. It is not a well-known document, and many members of the DOD have
never heard of it. Nonetheless, it is extremely important, because it defines the current
challenges that the DOD must plan for, and gives advice on priorities.
Figure 5. Relationships of key strategic documents.
While the UCP and CPG are purely operational documents, the Strategic Planning Guidance
(SPG) provides the bridge between the Defense Strategy and the planning, programming, and
budgeting world. The SPG is a classified document signed by the Secretary of Defense, and
gives overarching direction on strategic and budget priorities. In particular, the SPG (which is
published biennially) is a central source for guidance on where the DOD should reduce risk or
accept risk.
The SPG leads what is now known as the Enhanced Planning Process, which results in a set of
documents known as the Joint Programming Guidance (JPG). The JPG provides guidance
to the military departments and defense agencies on building their program proposals, and may
contain specific program information that will be useful for your assessment.
Another bridging document is the Transformation Planning Guidance (TPG). The first TPG
was signed by the Secretary of Defense in April 2003, and was designed to reinforce the current
Administration's interest in substantially changing the direction of DOD. As with the joint
concepts documents, it describes what the DOD might be and sets directions for change. In fact,
the 2003 TPG directed the preparation of the family of joint concepts documents.
The final document that you should examine is the most recent published Quadrennial
Defense Review (QDR) report. The QDR is mandated by law and requires the DOD to
undertake a comprehensive examination of its strategy and performance. To date, QDRs have
been conducted in 1997, 2001, and 2005, and each of those reviews has resulted in substantial
strategic and program changes. Many of the ideas that appear in the documents above first
appeared in a QDR report; for example, the notion of a capabilities-based approach (which
ultimately led to JCIDS) was first described in QDR 2001. Also note that a QDR report may
substitute for that year's SPG or TPG.
You still may not be convinced that you need to study these documents for a CBA. If you are
not, here's a short list of very compelling reasons to study them.
To find an organizing framework. The mission or function you are assessing probably
covers an enormous range of potential military operations. The documents above offer a
number of organizing frameworks (particularly the security environment framework in
the Defense Strategy) that will help you make your assessment manageable.
To identify overarching priorities. The SPG in particular has been quite aggressive in
specifying areas where the DOD should improve, and areas where the DOD can take risk.
If these documents offer such advice on areas related to your CBA, you should use them.
To help set performance standards. A central issue you will have to settle in your CBA
is setting the criteria for the assessment of how well DOD does (or should) perform a
mission or task. These documents contain authoritative advice on such criteria, such as
friendly losses and collateral damage.
Adversary expertise: who can credibly estimate the range of options open to an enemy?
Analytical ability: who has the tools, techniques, and track record that can support my
CBA?
Bureaucratic agility: who knows how to navigate among all the competing interests
safely?
Communications ability: who can communicate the results with brevity, clarity, and
believability to senior decision makers?
Cost estimation: who can forecast the costs of the options of interest?
Study design: who can build a study plan that satisfies the tasking, provides appropriate
linkage to the strategy, and is executable in the time allotted?
Study management: who knows how to organize and execute the CBA?
Technical knowledge: who knows what technology options are realizable as CBA
solutions?
Policy knowledge: who knows what policy options are realizable as CBA solutions?
Too often, we believe that to do a successful CBA on, say, integrated air and missile defense,
we just need to unearth a set of experts on air and missile defense doctrine, and the rest will
take care of itself. Unfortunately, history has shown this to be untrue; you will need all ten of
the types of expertise shown above, and you will not find all of them in one person.
Consequently, you need to explore the community and find out who is good at these things. If
they are available, you should note that for the eventual composition of your study team. If not,
you should still get advice from them on how to execute your assessment.
The difficult part of this job is finding out who is really good, as opposed to those who merely
claim to be good. The answer is not earthshaking; as you would with, say, a home improvement
project, you have to gather and check references.
This is where the literature search can come in very handy. If a study is deemed successful and
induces the DOD to make a substantial move, then many things went right. So find out who
made things go right. You can combine this search with your literature search, and you will end
up with a list of both useful study products and real experts.
This approach also helps you avoid being overwhelmed with people who find out you are
looking for help. Important studies attract many potential providers, but you cannot allow
yourself to be consumed with unsolicited proposals in the preparation phase. You have to lead
the search for expertise.
Figure 6. Notional organization matrix for a CBA.
In addition to the types of expertise shown above, you'll have to designate someone to function
as your deputy, essentially someone who stands in when you are unavailable.
Within the expertise chart, there are many choices of providers. You may use some
combination of:
other government organizations;
contractors;
Federally Funded Research and Development Centers (FFRDCs) or University Affiliated Research Centers (UARCs); and
informal advisors who are neither in the government nor on contract to the government.
We cannot give you a precise answer on whom to use as providers, because different CBAs
require different mixes of skills. We can, however, offer some considerations.
First of all, government organizations can be, and often are, redirected to other higher-priority tasks.
If you have a commitment from a government organization to provide help for your CBA, then
your chain of command will have to enforce it. And, since CBAs are often viewed as long-term
efforts that can tolerate delays, redirections away from CBA work are common.
Also, recognize that your CBA will largely be an additional duty for anyone helping you in an
external organization. The original JCIDS vision was that the JIC would divide the topic into
functions, those functions would be passed to each owning FCB for assessment, and the results
would be consolidated by the CBA lead. Unfortunately, the CBAs that have attempted this
method have found that it doesn't work very well. First, FCBs have a large routine workload,
and second, these assessments aren't easily partitioned; they really have to be done by an
integrated team.
Of course, you can use contractors. In this case, you have a formal contract, along with formal
avenues for redress if the work is unsatisfactory. But, to use contractors, you will have to get
funding, and allocate time to the competitive bidding process. Also, getting someone on
contract tends to take at least 60 days. More importantly, you must also ensure that any for-profit contractors you use do not have a financial interest in the outcome of the analyses. CBAs
result in findings that are acquisition-sensitive, as they prioritize needs and inform future
budgets and investments.
Another option is to use Federally Funded Research and Development Centers (FFRDCs) or
University Affiliated Research Centers (UARCs). These are not-for-profit organizations that
have a special relationship with DOD, and you can hire them directly. Understand, though, that
public law limits the amount of support FFRDCs can provide to DOD, so FFRDC man-years
are formally allocated. If you have identified an FFRDC as a source of expertise you want to
employ, you will have to get a man-year allocation.
You should try to get your core team together at the start. In several CBAs, certain elements of
the study team were not brought into the CBA until considerable work was done, due to
funding delays or the feeling that they weren't needed early on. This is an enormous mistake. If
you want a team that functions well, they have to be in on the whole effort, from end to end.
As a final note, when we say "study team," we are talking about the small, core team that takes
direction from you, and you alone, on what will be done and when. We are not referring to the
larger working group that you will form to deal with Combatant Commands, Services, Defense
Agencies, and other communities. Representatives from these groups will work with you, but
they report to other chains of command and have a primary task of monitoring your effort for
their organizations. This is not a pejorative distinction; these organizations have a right to know
what you are doing, and working group members are a valuable source of input. But, they are
not a part of your core team.
This team will accomplish the bulk of the work in the CBA. They will do all the fundamental
analysis and all the integration, and will generate all the supporting information for your
presentations. You will meet with them frequently, but remember that you are commanding, not
controlling. Unless you are a quantitative or policy analyst yourself, it is likely that your core
team is doing things that you are not trained to do. So resist the urge to tinker with them beyond
obtaining what you need to know to coherently present their work. This team is your most
valuable resource, so do not waste their time (e.g., don't make ten people come to your
overcrowded workspaces twice a week when you could go to their offices once a week).
On the other hand, you have to give your team overarching direction. You have to ensure that
they are helping you stay on track, that they are addressing the issues and not becoming too
focused on a narrow set of scenarios or analytical tools, and that their work reflects external
changes that you bring to them. You also have to ensure that they give you an accurate and
usable set of project management options when you have to react to (inevitable) schedule or
scope changes.
Finally, if your CBA is looking at issues at higher classification levels, you have to ensure at
the outset that the critical members of this team either have or can get the appropriate
clearances. Several CBAs have had significant problems with clearances, so much so that they led
to delays of up to one year.
are monitoring your CBA and reporting to their organizations if it appears the CBA
supports or refutes any of their organizations equities (enforcers);
have been directed to slow down your CBA so that it doesn't interfere with initiatives
being promoted by their organizations (saboteurs);
are waging personal campaigns to cure certain areas they believe to be defective in DOD,
and view your CBA as a means to those ends (zealots);
give long philosophical speeches that may or may not make any sense, but prevent your
meetings from accomplishing anything (bloviators);
are attending your meetings because their organization has no idea what else to do with
them (potted plants);
are convinced that your assessment is a cover story for a secret plot to destroy their
interests (conspiracy theorists);
are attending your meetings as a means to generate work for their organizations (war
profiteers);
have been directed to ensure that your CBA doesn't result in additional work for their
organizations (evaders); and
are forthright and competent individuals who want to get you relevant information and
useful advice that will help you succeed (professionals).
In a perfect world, your working group would consist only of professionals. But it will not.
Furthermore, you can't pick your working group; they are ambassadors chosen by their owning
organizations, and they can only be replaced under extraordinary circumstances. So, what can
you do?
First, you and your core team should stay several weeks ahead of the working group. A
CBA cannot be conducted as a journey of discovery in which you and a mixed crew
(which may contain mutineers) simultaneously discover what is around the next bend of
the river. You must plot the course.
Second, you should provide read-aheads to your working group a respectable time prior
to any meeting, set the meeting agenda and duration, and adhere to both with no
exceptions. Furthermore, you should minimize the number of meetings, and work with
individual organizations individually, on individual issues, as much as possible.
Another issue with working groups is that the comments of a working group participant, in
general, do not represent his organizations formal position. So, for certain critical questions
(large data calls, scenario selections, CONOPS solicitations) you will have to staff the
questions formally. JCIDS CBAs are littered with cases in which the study lead thought he had
concurrence, but was overturned later with a formal response. What you have to do is pick the
critical requests for which you want ironclad responses, and ask for a formal organization
position. If you are on the Joint Staff or are working within the FCB structure, the Joint Staff
Action Package (JSAP) is the best mechanism for doing this.
You have to provide a mechanism for integration among the FCBs (or affected functional
organizations if you are working on a Service or COCOM CBA). As we mentioned in Section
3.1, the original notion of partitioning the CBA among FCBs has not worked well. Nonetheless,
you still need to know what is going on in those functional areas; after all, the FCBs are
responsible for knowing the DOD portfolio in their functional area. Furthermore, the FCBs
need a way to examine your analyses of their function as it relates to the CBA.
Probably the best way out of this dilemma is to have people on your core team who have
relationships with the relevant FCBs and can accurately represent and analyze their capabilities.
This doesn't create more work for the other FCBs, and provides the visibility they need.
Finally, you will also have to consider what sort of governance your CBA will have. If you are
in an FCB working on a JROC-directed CBA, you will likely use FCB-JCB-JROC oversight
procedures for staffing. There may, however, be different oversight groups that you must
include, or you may be doing a CBA within a Service or Combatant Command. In that case,
you must decide what your governance structure will look like.
Oversight is a tradeoff between getting and maintaining senior leader support and being overmanaged. Consequently, you do not want to set up a huge structure. Most studies work quite
well with a single working group and a general officer steering group; some add an additional
integration group at the O-6 level. You should try to avoid having more than two levels of
oversight above your external working group.
You may think it's a good idea to limit access to your site via passwords, but this really doesn't
work; it just means that someone who wants your products has to be a bit cleverer about getting
them. Site passwords do not prevent someone else from using the password, nor do they prevent
redistribution. So don't bother. If you don't want something distributed, don't put it on the site.
Figure 7. Task relationships and overlap for a CBA.
The precedent relationships in Figure 7 follow what we have discussed so far. Answering the
question of why you are doing your particular CBA is the starting point, and we do not
recommend that you proceed unless you have that answer. The strategic, doctrine, and literature
reviews can all start in parallel with the expertise search, but you should finish the strategic
review prior to filling out your core team. In keeping with the edict to stay ahead of your
working group, you should have the study plan (including organization and working
relationships) drafted prior to the first meeting of the working group.
Note that you will have to get some help early on if you want to try to do the various review
tasks and search tasks in parallel. Hence, the task is to complete selection of the core team prior
to building a study plan and forming a working group.
You may have a CBA that depends on a JIC, but you may not be able to influence the progress
of the JIC. Yet, experience has shown that you don't need to wait until the JIC is complete to
do a substantial amount of work. Figure 7 shows how much can be done prior to final JIC
approval. The other shaded process (the quick look) is a task limited to your core team, and will
be discussed further in Chapter 5. For scheduling purposes, you should try to complete the
quick look prior to the completion of the FAA.
Because of the wide range of CBA types and topics, it is difficult for us to recommend typical
times that it should take to complete the tasks shown above. Also, some tasks may have already
been completed. For example, the JIC writing team may have already summarized all the
relevant doctrinal literature (and this would be another reason to participate in JIC
development).
JCIDS specifies three formal decision points: the FAA, the FNA, and the FSA. So, you will
staff results through JCIDS channels at least three times. In addition, if the JROC directed your
CBA, then your study plan must be approved as well.
The other major decision points are concerned with requests that you want to staff formally,
such as data calls, threat assessments, and so on. You will normally conduct these within an
FAA, FNA, or FSA, so we will cover those in Chapters 6 - 8.
When CBAs were first established, the prevailing opinion was that they should take 270 days (90
days for each of the major analyses). Unfortunately, none of the JROC-directed CBAs done to
date has even come close to finishing in 270 days.
Here are some reasons for this.
JIC delays. The JIC is usually commissioned at the same time as the CBA, so the CBA
can't really start until the JIC has at least been drafted. In reality, several CBAs have
completed the FAA and gone into the FNA before the JIC was finally approved (NOTE:
if this happens, it is possible to execute the CBA and JIC simultaneously, but such an
arrangement would have to be negotiated through the JROC, and your CBA will have to
absorb the overhead of frequent communication with the JIC writers).
False starts. Several CBAs were well down the road when they discovered that they
either had an unmanageable scope, the wrong team, or the wrong methodology.
Backtracking and fixing these problems caused considerable delay.
Staffing results through JCIDS. Suppose that your CBA was JROC-directed. Then the
CBA study plan, FAA, FNA, and FSA must be approved by the JROC. This means that
each must be presented to the lead FCB, then the Joint Capabilities Board (JCB), and
then to the JROC. If each of these takes a week to schedule and execute (including the
inevitable prebriefs and resulting modifications), then you will spend at least 4 x 3 x 7 =
84 calendar days just getting results presented and approved, and this does not include
staffing (a rough sketch of this arithmetic appears after this list). And, since each step determines the next step, it's risky to start the next step
without approval of the previous step.
Command redirection. CBAs tend to outlast the four-stars that commissioned them, and
their replacements may direct (and have directed) substantial changes to the scope and
emphasis of the assessment.
Clearance problems. As mentioned in Section 3.2, several CBAs have had significant
delays because of difficulties getting clearances for study team members.
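To make the schedule arithmetic in the staffing bullet above concrete, here is a minimal, hypothetical sketch in Python; the function name and defaults are ours, and the figures are only the illustrative numbers used in that bullet (four products, three review bodies, roughly a week per presentation).

# Hypothetical sketch of the approval-staffing arithmetic described above.
# The defaults reflect the illustrative figures in the text, not fixed JCIDS rules.
def approval_days(products: int = 4, review_bodies: int = 3, days_per_briefing: int = 7) -> int:
    """Calendar days spent just presenting results for approval."""
    return products * review_bodies * days_per_briefing

if __name__ == "__main__":
    # Study plan, FAA, FNA, and FSA, each briefed to the FCB, JCB, and JROC.
    print(approval_days())  # 84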
Regrettably, presenting results for approval will take far more time than you ever thought. As a
result, you have to schedule so that your team is still producing while you are bringing forward
results for approval. This is a challenge, because the JCIDS analysis process is entirely
sequential. We recommend a quick look (Chapter 5) as a way to mitigate some of these delays.
Finally, just because history has shown that these assessments tend to go slowly does not allow
you to execute at a glacial pace. You will have to push the effort along. Otherwise, you will be
in serious danger of delivering answers long after the key decision windows have closed, and
the four-stars that were interested in the topic have already made up their minds without being
informed by anything that you did.
Scenarios considered. We cannot say that we actually have a capability unless we test it
against various adversaries and operating conditions. The sample of adversaries and operating
conditions (in other words, the scenarios used) is a component of the scope of an
assessment.
Functions considered. It is difficult to find a military operation that does not employ
virtually all functions of the DOD, from exercising space control to providing physical fitness
facilities. But, not all of the employed functions must (or should) be analyzed in a CBA.
Types of solutions considered. In some cases, the type of solutions allowed by policy,
existing treaties, and so on may narrow the scope (e.g., space-based weapons may be ruled
out at the outset). Also, if you have a solution-oriented CBA such as Seabasing, your
assessment is limited to assessing the alternatives within, and utility of, that concept.
Resource limits. While resource limits have not been imposed on any CBA done to date, it is
entirely possible to scope a CBA by stipulating limits on solutions, such as requiring that the
FSA output must present options that do not require additional manpower or funding. Note
that recent changes to JCIDS require that the FSA consider alternative CONOPS that use
non-materiel solutions [JROCM 062-06, 2006].
Planning horizon. This is the time period that the CBA is considering, for both adversaries
and potential solutions.
Of these six areas, the JIC will directly address desired capabilities, functions, and the planning
horizon. Additionally, the JIC will offer an illustrative scenario. You, however, will have to determine
the rest.
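If it helps your core team keep these scoping decisions in one place, the following is a minimal, hypothetical bookkeeping sketch in Python; the field names simply mirror the scoping elements discussed above and are not a format prescribed by JCIDS.

# Hypothetical scope record; the fields mirror the scoping elements above.
# This is only a bookkeeping aid, not a JCIDS-mandated structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CBAScope:
    capabilities: List[str] = field(default_factory=list)     # desired capabilities (from the JIC)
    scenarios: List[str] = field(default_factory=list)        # scenario sample to be analyzed
    functions: List[str] = field(default_factory=list)        # functions actually assessed
    solution_types: List[str] = field(default_factory=list)   # classes of solutions allowed
    resource_limits: List[str] = field(default_factory=list)  # e.g., "no additional manpower or funding"
    planning_horizon: str = ""                                 # time period under consideration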
A challenge in specifying these elements in a study plan is that they constitute most of the "what" of
the CBA, which is a major output of the FAA. Yet, this direction says that the study plan must contain
the description of what will and will not be considered. So how can you reconcile this guidance?
The answer is that there is nothing (at present) that says you have to complete the study plan prior to
starting the FAA. As a matter of fact, the current guidance only implies that the study plan must be
approved prior to your presenting FAA results, so you can proceed as illustrated in Figure 7.
Consequently, we will discuss ways to do the scoping in Section 6.
Also, you can define the FAA as the public part of your study, that is, the point at which you form
your working group and solicit input. This allows you to do a lot of research that you need for both
the FAA and the study plan, and permits you to put out a study plan prior to starting the public
portions of the FAA.
The study plan should not be an enormous document. Shorter is better, and you should aim for a plan
that is 15 pages or less. There is no set format, but the following outline is a composite of CBA study
plans done to date.
References. List DOD guidance that directly affects your CBA, plus applicable joint concept
and scenario documents.
Purpose. This contains a single paragraph that states the purpose and contents of the study
plan.
Background and Guidance. Summarize the answer to the "why this CBA?" question and
quote DOD guidance relevant to your CBA.
Objectives. Describe the type of CBA you have and the desired products.
Scope. Discuss the six elements of scoping as they apply to your CBA, and refer back to the
relevant DOD guidance to support your scope. This is the most important part of the study
plan, so you will have to devote some space to proving that your scope is correct.
Methodology. Leave yourself room to adjust in this section. Be specific about how you
intend to do the FAA, but allow for options in the conduct of the FNA and FSA.
Organization and Governance. It is not necessary for you to describe how your core team
will function; this section should instead concentrate on how you will work with external
organizations, to include your web site and coordination procedures. You should also
document the governance structure of your CBA, including all oversight committees and
general officer steering groups.
Projected Schedule. Keep this short, and limit it to major staff actions and milestones (FAA,
FNA, and FSA) that you already know about. Say that an updated schedule will be
maintained on your web site.
Responsibilities. List what you want from external government organizations. For now, you
should be able to specify which organizations should provide representatives to your working
groups. If you are planning on relying on external government organizations for major parts
of your assessment, list them in the study plan and also refer to them in the methodology
section.
Remember that the study plan contains your initial proposals for how you will proceed. It is not an
ironclad contract, because bodies such as the JROC that commission CBAs retain the right to redirect
you. Since it is likely that you will modify the scope of your CBA during its execution, changes are
allowed, and the study plan is really a live document. But, to get approval to start, you have to
demonstrate that you're ready to start. The study plan is the basis for that decision.
Clearly, you can maximize your flexibility by minimizing the number of activities you commit to in
the study plan. Unfortunately, your desire for managerial flexibility is at odds with the leadership's
desire to see evidence that you have an approach that is workable. Consequently, you should present
your initial thoughts on the following in the methodology section of the study plan.
Methodology approaches. You probably have some idea of the analytical tools and
techniques you will use for your assessment. While this is not a primary element of scoping,
the choice of methodology is a direct consequence of the capabilities, scenarios, and
functions you want to evaluate. This is important enough that you should cover at least what
level of analysis you expect to conduct (see Section 7.2).
Measures of effectiveness (MOEs).1 While we show these as an FAA output, you should
offer an initial list in the study plan. If you have a JIC, it will give you some advice on
measures. Otherwise, you can derive some measures from attributes listed in the applicable
JOCs and JFCs.
Technological and policy opportunities. Two central reasons for commissioning a CBA are
first, to examine areas where we need to improve, and second, to examine areas where
improvements are possible due to technological or policy opportunities. If the latter is the
case with your CBA, you should mention that in the study plan, and list the specific
technological or policy opportunities.
The quote that begins this section makes it very clear what the JROC wants from a CBA study plan.
To be accepted, the study plan must communicate that:
Format and the order of the sections are not a concern; clarity, brevity, completeness, and believability
are.
Finally, your study plan may not need to address an FAA, FNA, and FSA. If a COCOM commissions
a CBA based on its assigned missions, it may have already accumulated enough information to
constitute an FAA. Also, the JIC may contain sufficient information that no additional FAA work is
necessary. In that case, the study plan can reference that work and concentrate on the FNA and FSA.
Also, there may be no need to do an FSA, because the intent of the assessment may be to develop
information to support a joint experiment, or you may know at the outset that the FSA will be done by
a different organization. The point is that there are several possible ways in (and out) of a CBA, and
the general format we offer supports all of them.
1 In this paper, we define an MOE as a measure of the degree to which we can meet an operational objective, as
distinguished from a Measure of Performance (MOP), which is a measure of how well a system or force
element performs its functions (e.g., survivability or lethality).
Bounding the effectiveness of current doctrinal CONOPS. How good are we now?
Suppose we currently attack enemy amphibious ships using certain types of platforms,
weapons, and tactics. How good could we become if we updated the platforms and
weapons? And what updates are fiscally and technologically possible?
Bounding options open to the enemy. What sorts of things can the enemy do to prevent
us from achieving the desired effects? We note that current operations in Iraq show just
how adaptive and innovative an enemy can be; no assessment done prior to Operation
IRAQI FREEDOM predicted how much the use of improvised explosive devices would
disrupt our stabilization operations.
Bounding investments in the capability areas. The CBA will be assessing a set of
capabilities introduced in the JIC. How much has DOD typically invested in these areas?
How much more (or less) is it likely to invest? If the DOD decided that it wanted to
maximize capability in this area, what would the maximal rational investment be?
Bounding alternative CONOPS and operating policies. Are there things we dont do
right now that we might do? For example, we obeyed the antiballistic missile treaty
negotiated with the Soviets for many years after the Soviet Union dissolved, but then
withdrew from it and began fielding national missile defense systems. Are there similar
alternatives available that could substantially change how we achieve certain capabilities?
The notion of bounding is very important for your assessment. Many DOD studies spend far
too much time refining baseline CONOPS, performance estimates, investment trends, and
policy limits. Such studies ultimately produce answers with considerable depth but no breadth,
and investigate very few alternatives. Since the quick look is entirely yours, you are free to
search for plausible situations that may result in radically different views of the military
problem being considered in your CBA. You do not have to have an external working group
filtering what you examine, nor do you have to seek concurrence from anyone. It is just you
and your core team.
diary of how the assessment proceeded from wide quick look bounds to progressively more
focused recommendations. Again, this is a product for you and your management, and there is
no reason to staff or distribute it outside of your core team.
Ideally, you would finish the quick look prior to starting the FAA, because then you would
have a preliminary analysis in hand to help with scoping. But schedules may forbid that, so the
latest completion date for the quick look should be just prior to the formal staffing of the FAA.
If you delay the quick look longer, you will have to make decisions on CBA directions without
a bounding analysis of the potential outcomes, which is risky.
So, to further explain the precedence relationships in Figure 7, we recommend that you begin a
quick look with your internal team as soon as possible. The quick look can inform the study
plan, but it is not a prerequisite; you can work on the quick look and the study plan
simultaneously, but you should organize the quick look so that it first addresses the scoping
issues that the study plan must address. Finally, you should have both the quick look done and
the study plan approved prior to formal staffing of the FAA.
potentially different views. Also, remember that joint concepts are intended to drive progress,
so they may present views at odds with current doctrine. Reconciling these positions is formally
a JIC issue, but be aware that conflicts may not be settled when you begin your CBA. In some
CBAs, the question of what military problem was being studied persisted until the end of the
FNA.
Scenarios provide the means to assess the capabilities associated with the mission
area. We cannot declare whether or not the DOD has a capability without testing it
against real enemies with real objectives, forces, and geography. Otherwise, anyone
could simply assert the presence or absence of a capability without providing any proof.
Scenarios provide a way to connect the assessment topic to the existing strategic
guidance. A few years ago, many analysts were claiming their products were
capabilities-based because they posited imaginary enemies operating in synthesized
environments (e.g., assisting in the defense of Puceland from an invasion by the evil
Mauvians). While such artificiality provided some degree of scenario agnosticism,
imaginary forces, objectives, and geography had to be specified. Worse, since these
warring factions didn't actually exist, there was no way to connect them to very specific
strategic guidance on achieving aims in the real world.
Scenarios provide a way to test the concept against the breadth of the defense
strategy. The original aim of the capabilities-based approach was to broaden our
strategic perspective by considering a wider range of military situations. By choosing a
good scenario sample, you can assess the concept against a wide range of relevant
situations and comment on its overall applicability. Also, your assessment will be insured
against sudden swings in priorities (e.g., the shift to the Global War on Terror after
September 11, 2001).
Scenarios provide the spectrum of conditions for the FAA. Scenarios yield a range of
enemies, environments, and access challenges, all of which constitute conditions.
While it is crucial to choose the range of scenarios wisely, scenario selection is less difficult
than you might think. For example, the current defense strategy divides all future security
challenges into four categories:
traditional challenges are posed by states employing recognized military capabilities and
forces in well-understood forms of military competition and conflict;
irregular challenges come from those employing unconventional methods to counter the
traditional advantages of stronger opponents;
catastrophic challenges involve the acquisition, possession, and use of weapons of mass
destruction (WMD) or methods producing WMD-like effects; and
disruptive challenges may come from adversaries who develop and use
breakthrough technologies to negate current U.S. advantages in key operational domains
[National Defense Strategy, 2005, p. 2].
Furthermore, all the scenarios in the DOD Analytical Agenda have been mapped to one or more
of these categories. So, if you choose this framework, you should pick at least four scenarios,
one for each future security environment.
This framework is not the only one available, however. The strategy also offers four strategic
objectives:
establish favorable security conditions [National Defense Strategy, 2005, pp. 6-7].
One framework may suit your CBA better than another. Regardless, you must resist the urge to
pick one scenario and devote all your time to it, under the assertion that "if we can do this, we
can do anything." This sort of Maginot Line reasoning has been proven untrue so often that the
idea should never come up. But this notion, however flawed, appears to be ineradicable.
Instead, force your CBA to inspect a wide range of situations (including enemy options within
those situations), and reduce them to a set that provides a good sample, one that covers the
breadth of the defense strategy.
The idea of sampling is very important in scenario selection. An operation such as forcible
entry could be conducted in a very wide range of environments, and you will not have time to
analyze all the interesting situations. Instead, you'll have to pick a set of criteria and use those
criteria to select a manageable but comprehensive set. There are quantitative methods available
to help choose a sample, so you should have a member of your team who has these skills.
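If you want a feel for what such a quantitative aid might look like, the following is a minimal sketch (all scenario names, category mappings, and analysis costs are hypothetical) of a greedy covering heuristic that picks a small scenario sample spanning the four security challenge categories within an analysis budget:

    # Minimal sketch: greedy selection of a scenario sample that covers all four
    # security challenge categories within a fixed analysis budget.
    # All scenario names, category mappings, and costs are hypothetical.

    candidates = {
        # scenario name: (categories covered, analysis cost in staff-weeks)
        "Major regional conflict": ({"traditional", "catastrophic"}, 12),
        "Insurgency support": ({"irregular"}, 8),
        "WMD interdiction raid": ({"catastrophic", "irregular"}, 6),
        "Anti-access campaign": ({"traditional", "disruptive"}, 10),
        "Homeland cruise missile defense": ({"disruptive"}, 5),
    }
    required = {"traditional", "irregular", "catastrophic", "disruptive"}
    budget = 30  # total staff-weeks available for scenario analysis

    sample, covered, spent = [], set(), 0
    while covered != required:
        # pick the affordable scenario that covers the most still-uncovered categories
        best = max(
            (s for s, (cats, cost) in candidates.items()
             if s not in sample and spent + cost <= budget),
            key=lambda s: len(candidates[s][0] - covered),
            default=None,
        )
        if best is None or not (candidates[best][0] - covered):
            break  # nothing affordable adds coverage; report the shortfall instead
        sample.append(best)
        covered |= candidates[best][0]
        spent += candidates[best][1]

    print("Selected sample:", sample)
    print("Categories covered:", sorted(covered), "Cost:", spent, "staff-weeks")

A real selection effort would use richer criteria (geography, access challenges, enemy sophistication) and a proper optimization or experimental design method, but even a toy model like this forces the team to state its selection criteria explicitly.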
One useful task for the quick look is to have your core team examine all of the Analytical
Agenda scenarios (there are now over 100 of them, most with multiple variations) and suggest
combinations that are comprehensive and analyzable within the time and resources available.
This exercise will provide many insights in and of itself, and can be done in a day or two.
Scenario selection will likely be the first area of contention in your CBA. One shortcoming of a
capabilities-based approach is that it is very easy to define a situation that requires a particular
capability that is best addressed by a particular solution. Consequently, people promoting these
solutions will try to drive you towards those situations to the exclusion of all others. Your job is
to resist these sorts of hijacking attempts, and instead ensure that the CBA addresses the range
of military operations described in the strategy.
You need a formal position on the choice of scenarios. If you are doing a JROC-directed CBA,
scenario concurrence will come when you staff the study plan in JCIDS.
creating effects, and the abilities to achieve those effects are the capabilities that are the basis
of your assessment.
This leads to a straightforward examination of each scenario. For example, you may be
assessing integrated air and missile defense, and you are contemplating a typical regional
conflict. The overarching objective is to win the war, and a subordinate objective would be to
win the ground battle. To win the ground battle, we may choose to deploy ground forces, and
those forces have to be protected from enemy air and missile attack at their ports of
debarkation. Providing that protection is the capability that you are assessing; the scenario
provides the context.
What we have outlined above has long been practiced in the DOD under various names. The
best-known label is strategy to task [for example, see Pirnie and Gardiner, 1996]. We have
already advised you to investigate the higher levels of strategic documents for advice related to
your CBA topic; now we are advising you to connect the capabilities you are assessing to your
scenario sample.
This step may seem simple, but it can prove challenging. For example, an
alternative CONOPS for our example above might involve using allied ground forces to win the
ground fight, and not deploying any of our ground forces at all. Then, protecting deploying
ground elements becomes irrelevant, and providing the capability is no longer necessary. So, it
is important to recognize that capabilities are a function of both scenario and CONOPS.
Of course, we may choose to protect allied ground forces from air and missile attack, or we
may have to protect our deploying air and maritime forces. The point is, by tying the
capabilities to scenario objectives and a set of CONOPS, you eliminate the problem of trying to
assess in terms of capabilities de nusquam.
Early writing on JCIDS often referred to critical capabilities, implying that there are other
capabilities that are not critical. To save yourself a semantic debate, merely state that in your
CBA, the critical capabilities are those effects that you have opted to assess in your scenarios.
Essentially, you are giving your working group a mission order. You want them to tell you how
they would do the mission, particularly:
what the sequencing of tasks and dependencies among tasks would be; and
If you have a concept-oriented CBA such as seabasing, you also want your group to give you
proposals on how the solution concept would be employed to provide the capabilities.
If you use the Analytical Agenda scenarios (and we highly recommend you do that), then you
can just refer your group to those documents. If you don't use these scenarios, you'll have to
provide a great deal of information and justification, and it will probably prove to be far more
trouble than it is worth.
Even though you are analyzing needs 10-20 years in the future, you should ask for current
approaches to providing these capabilities. The reason for this is that you can get the Combatant
Commands involved. They likely have an OPLAN or CONPLAN available for scenarios
similar to yours, and they have thought through how to achieve the effects with forces that
actually exist.
Combatant Command staffs, however, do not like to comment on future capabilities.
Consequently, you will have to ask the Services for how they would operate in a future time
period if they execute their program. This will result in several different proposals; if you have
the time, you should conduct a joint war game to try to come up with a set of joint proposals.
You will have to ask for this information in a standard form that allows your working group to
document their proposals in an efficient way. You can use the JCAs and UJTLs (Universal
Joint Task Lists) to describe tasks and component responsibilities, and use a format similar to a
Gantt chart to show timing and task precedence relationships.
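As an illustration of what that standard form might capture in data terms, here is a minimal sketch (task names, owners, durations, and dependencies are hypothetical) that records tasks, responsible components, and precedence relationships and computes earliest start and finish times, which is the essence of a Gantt-style depiction:

    # Minimal sketch: representing a doctrinal approach as tasks with durations and
    # precedence relationships, then computing earliest start/finish times.
    # Task names, owners, durations (days), and dependencies are hypothetical.

    tasks = {
        "Deploy ISR":            {"owner": "Air component",      "days": 3,  "after": []},
        "Secure port":           {"owner": "Maritime component", "days": 5,  "after": ["Deploy ISR"]},
        "Deploy ground forces":  {"owner": "Land component",     "days": 14, "after": ["Secure port"]},
        "Establish air defense": {"owner": "Joint force",        "days": 4,  "after": ["Secure port"]},
    }

    start, finish = {}, {}
    for name in tasks:  # assumes tasks are listed after their predecessors
        t = tasks[name]
        start[name] = max((finish[p] for p in t["after"]), default=0)
        finish[name] = start[name] + t["days"]

    for name, t in tasks.items():
        print(f"{name:22s} {t['owner']:20s} start day {start[name]:2d}, finish day {finish[name]:2d}")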
Now, you may have sufficient doctrinal expertise on your core team to feel that you can
describe "how we do it" (or will do it) with your own resources. This is fine, but you still need
formal concurrence that what you have represents our current doctrinal thinking. The formal
concurrence is crucial. Otherwise, your CBA won't even have an agreed-upon starting point.
Note that you may receive a CONOPS that specifies a substantially different set of capabilities
than what you had in mind. In the air and missile defense example above, for example, one
proposal may call for using nothing but undersea assets, which again would make defending the
land and air domains irrelevant. So what would you do with such a proposal? Well, you would
keep it if there is evidence that it could be done, and assess it in the FNA and FSA. After all,
eliminating the need to provide a capability is just as much a solution as providing the
capability.
The art of collecting current approaches is that you must ask for enough detail to specify the
forces and timing associated with providing the capabilities, but not so much that the workload
and the output obscure the really important issues. Several CBAs have gotten bogged down in
task hierarchies and activity models to the point that they lost sight of the objectives of the
exercise. Generating reams of task tables does not help answer the larger questions.
psychological operations. You would probably opt to assume that those operations would
execute as planned, and not treat that particular function in your assessment.
Determining what functions and tasks you will analyze is an important part of scoping your
CBA. In general, you will not address a function or task when:
the function or task does not apply to your concept and your scenarios;
there is ample evidence that the function or task will succeed in your scenarios.
Several CBAs have relied on group approaches to determine the sets of critical tasks and
functions, ranging from simple voting to use of multiattribute decision theory. These
approaches have merit; among other things, they allow wide participation and can be executed
very quickly. But be warned that such techniques may prevent you from examining functions
and tasks that should be addressed. Prior to Operation Iraqi Freedom, no group predicted that
providing road security after cessation of major combat operations would be a critical task.
Unfortunately, it has proven to be just that.
The issues associated with group methods and the tendency for such groups to merely assert
existing approaches are exactly the reason you need a bounding analysis in the quick look. In
particular, you have to examine the range of potential enemy responses, and use that to help
decide which functions and tasks to assess. JCIDS also provides a formal means to get such
advice, as CJCSM 3170.01C requires the DIA to produce an Initial Threat Warning
Assessment if you ask for one [2006, p. A-10].
Finally, remember that existing structures such as the JCAs and the UJTLs were built to reflect
the DOD organization as it currently exists. Some CBAs have taken an approach similar to the
mythological Greek innkeeper Procrustes, who ensured his guests fit his beds either by
stretching them on the rack or chopping off their feet. If you have a new concept or a new
CONOPS, the existing task frameworks simply may not fit it very well, and you should use
some other depiction. Do not torture your analyses to fit the framework; otherwise you may be
killed, as Procrustes eventually was by Theseus.
You will not find (much less derive) a simple set of pass-fail criteria for all the scenarios,
objectives, functions, and tasks that you are assessing that you can defend. Perhaps the current
doctrinal standard for establishing a certain level of communications connectivity in a deployed
location is 72 hours. Why 72? Do we lose the battle if we are an hour late? Also, attempting to
write down standards for all possible tasks and functions in a microscopic fashion is
complicated, time-consuming, and may very well not add up to anything that will help the FNA
and FSA.
So what should you do?
Recall that JCIDS discusses things called attributes, defined as "a quantitative or qualitative
characteristic of an element or its actions" [CJCSM 3170.01C, p. GL-5]. If you have a JIC, the
JIC will list a set of attributes; if you don't, you can refer to attributes listed in the relevant
JOCs and JFCs.
For example, the Seabasing JIC lists the following attributes:
degree of interoperability;
survivability; and
These attributes capture the concepts intent for judging the utility of alternative seabasing
concepts, and provide a starting point for you to derive your evaluation criteria.
But, notice that the seabasing list is not comprehensive. For example, there is no attribute that
says contribution to the warfight. Now, the implication is that a small, large capacity,
interoperable, survivable, accessible seabase that deploys quickly and employs and sustains
forces at high rates cannot help but improve the warfight. Nonetheless, there may be situations
where having a seabase does not significantly affect the outcome.
This is why you have to augment what comes in the typical JIC. You need to connect the
attributes to the scenarios youve chosen, and come up with appropriate metrics for the
attributes. Some metrics, such as those associated with survivability, are straightforward.
Others, like measuring interoperability, are much more difficult.
Also, the other strategic documents mentioned in Section 2.3 are likely to contain guidance that
will affect your choice of criteria. The SPG in particular tends to contain very specific guidance
on mission areas where we should either decrease or accept risk, and you must try to respect
this guidance when you develop your standards.
Note also that none of these attributes have obvious pass-fail criteria associated with them, and
instead are probably better represented by a continuum of values. Figure 8 shows a notional
value function associated with a survivability attribute; in this case, the metric is expected
personnel lost in a particular scenario, and the payoff values range from 0 to 100.
The representation of payoffs versus metrics is much more useful, because it allows you to
represent how we might value a continuum of outcomes, rather than simply stating "the
standard is 72 hours." Note that the function in Figure 8 does not prevent you from asserting a
threshold value; for example, you might establish that alternatives that expect to lose more than
1000 personnel are simply unacceptable and will not be considered.
Also, expressing your evaluation criteria in a common scale (here, we are using an abstract
value scale) will allow you to investigate tradeoffs later on in the CBA. In the seabasing
example, the desire for high capacity would generally lead to a larger seabase, which is at odds
with the desire to minimize infrastructure.
Figure 8. Notional value function for a survivability attribute (value from 0 to 100 versus expected personnel lost, plotted on a logarithmic scale from 0 to 10,000).
The methodology associated with the development and use of functions such as the one in
Figure 8 is generally known as multiobjective decision analysis, and there are many good books
available on the subject (see, for example, Kirkwood [1997]). Even if your CBA will eventually
use large-scale combat modeling and simulation to estimate warfighting outcomes, it is worth
spending the time to link attributes, metrics, and value functions. The process will help you
identify the most important measures, and will also demonstrate how some of the standards
may conflict (e.g., minimizing infrastructure while maximizing throughput).
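To make the linkage of attributes, metrics, and value functions concrete, here is a minimal multiobjective decision analysis sketch in the general spirit of Kirkwood [1997]; the attributes, weights, value mappings, and alternative scores are all hypothetical and would be derived from your concepts and strategic guidance in a real CBA:

    # Minimal sketch of multiobjective decision analysis: each attribute's metric is
    # mapped to a 0-100 value, and values are combined with weights that sum to one.
    # Attributes, weights, value mappings, and alternative scores are hypothetical.

    weights = {"survivability": 0.4, "capacity": 0.35, "interoperability": 0.25}

    # Single-attribute value functions: map a raw metric onto a common 0-100 scale.
    value_functions = {
        "survivability":    lambda losses: max(0.0, 100.0 - losses / 10.0),  # expected personnel lost
        "capacity":         lambda tons: min(100.0, tons / 50.0),            # short tons per day
        "interoperability": lambda score: score,                             # already rated 0-100
    }

    alternatives = {
        "Large seabase": {"survivability": 400, "capacity": 4000, "interoperability": 70},
        "Small seabase": {"survivability": 150, "capacity": 1500, "interoperability": 85},
    }

    for name, metrics in alternatives.items():
        total = sum(weights[a] * value_functions[a](metrics[a]) for a in weights)
        print(f"{name}: overall value {total:.1f}")

Writing the value functions down, even crudely, is what exposes the conflicts between standards that this section describes.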
If you cannot avoid the pressure to present single-number criteria, consider using the following
set of thresholds:
the minimum level of the measure required for mission success (go-no go threshold);
the minimum level at which the measure is no longer a critical or pacing part of the
CONOPS (nominal performance threshold); and
the level of the measure above which there is no real increase in mission effectiveness
(gold-plating threshold).
If you develop these three numbers, you will probably find that connecting the dots yields a
curve similar to the one in Figure 8. There is a range below which you cannot function at all, a
range that gives benefits as the measures improve, and a range above which increased
investment simply isn't worth it.
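Connecting those three thresholds really does produce a curve like Figure 8. Here is a minimal sketch using hypothetical thresholds for the communications-connectivity example (a smaller-is-better metric measured in hours):

    # Minimal sketch: turn three thresholds into a piecewise-linear value function
    # for a "smaller is better" metric (hours to establish communications).
    # The threshold numbers are hypothetical.

    GO_NO_GO   = 96.0  # above this, the mission fails outright (value 0)
    NOMINAL    = 72.0  # at this level, comms are no longer the pacing item (value 80)
    GOLD_PLATE = 24.0  # below this, faster setup adds no real mission effectiveness (value 100)

    def value(hours: float) -> float:
        """Map hours-to-connectivity onto a 0-100 value scale."""
        if hours >= GO_NO_GO:
            return 0.0
        if hours <= GOLD_PLATE:
            return 100.0
        if hours > NOMINAL:
            # between go/no-go and nominal: value climbs from 0 to 80
            return 80.0 * (GO_NO_GO - hours) / (GO_NO_GO - NOMINAL)
        # between nominal and gold-plating: value climbs from 80 to 100
        return 80.0 + 20.0 * (NOMINAL - hours) / (NOMINAL - GOLD_PLATE)

    for h in (120, 96, 84, 72, 48, 24, 12):
        print(f"{h:3d} hours -> value {value(h):5.1f}")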
effects. The scenarios and effects result in capabilities, which represent the condition output of
the FAA.
Once the conditions are established, collecting doctrinal approaches allows you to derive an
overarching task structure. This structure, along with your literature search, will help you
decide which functions to analyze explicitly in the FAA, and represents the task output of the
FAA.
Finally, comparing the scenarios, objectives, and task structure to attributes available in the
joint concepts allows you to choose measures. Once you have developed a set of measures, you
can again go back to the strategic guidance and existing doctrine to develop value functions
(and also minimum performance criteria) for the measures.
Figure 9. Major FAA tasks and flow (define the military problem and research the origins of the CBA tasking; choose a strategic framework; examine candidate scenarios and select and coordinate the scenario sample; list military objectives and capabilities and specify conditions; collect doctrinal approaches, develop the overarching task structure, derive tasks, and choose functions to analyze; and choose relevant attributes, develop measures, and develop standards).
You should be careful when formulating these depictions. One of the problems with using the
DODAF is that it was intended for systems engineering applications, where it is critical that all
connections be documented and made to function (or else the machine won't work). You
cannot assess at that level of detail in a CBA, which examines an entire mission area or a broad
concept. You must settle for a more aggregate, higher-level view of the topic.
It may be difficult for you to compose a small set of graphics that depict the operations you are
analyzing. For example, if you were doing a CBA on undersea superiority, you may decide to
analyze an entire regional warfight to determine to what extent our undersea capabilities
determine the outcome of a particular scenario. But, you could still depict the conflict at the
theater level, and show how undersea superiority affected other aspects of the fight.
Figure 10. Task flow for a rendition operation from the Global Strike Raid Scenario CBA (alert; establish and move to a support base; move to a launch base; transit maritime assets to the AOR; understand targets and choose a course of action; secure the tactical area of operations; attack the camp, search, and render targets; pursue and render targets; exfiltrate the force and targets; and return to the support base, supported throughout by battle management and C2, mission support, and assessment of effects).
In addition to setting you up for your analysis, generating an operational depiction can also help
settle misunderstandings over terminology. Figure 11 shows the overarching engagement
sequence derived for the Integrated Air and Missile Defense (IAMD) CBA. This CBA took a
variety of kill chain depictions and derived a sequence to use as a common reference for
analysis. This allowed that CBA to depict existing doctrine using a single structure, which
allowed for comparisons of proposals and solved a considerable communications problem.
Figure 11. Derivation of the common Joint Engagement Sequence for the IAMD CBA (Service, agency, and notional kill-chain depictions, such as detect-track-classify-TEWA-engage-assess and find-fix-track-target-engage-assess, aligned into a single common engagement sequence).
Notice that the operational outcomes of concern are obvious in both Figures 10 and 11. In the
case of Figure 10, the question is whether we captured the targets of interest, and what
consequences (losses, collateral damage, and so on) resulted from the raid. In Figure 11, the
outcome of interest is whether or not we successfully defended against the missile attack. In
both cases, the operational depiction allows you to communicate with clarity and economy how
you are portraying the mission.
You should also derive a set of operational depictions for enemy forces. After all, enemies have
objectives and alternatives open to them, and your assessment has to account for those as well.
Consider the diagram in Figure 12, which is based on the Global Strike Raid Scenario CBA.
This figure shows how enemy leadership targets relocate in a conflict on receiving warning.
While they spend most of their time in unhardened government facilities, on warning they
transit to a vehicle, then to either an informal hide site or a formal hardened site. If they get
warning while at one of these sites, they move again. Clearly, if your objective is to interdict
one of these leaders, then you must characterize how they move.
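One simple way to characterize that movement is to treat Figure 12 as a state-transition model and simulate it. The sketch below is illustrative only; the transition probabilities, and the assumption that a target stays put once in a hardened facility, are hypothetical placeholders rather than values from the actual CBA:

    import random

    # Minimal sketch: treat the Figure 12 diagram as a state-transition model and
    # simulate where a leadership target is after a series of warnings.
    # Transition probabilities are hypothetical; the hardened facility is assumed
    # (for illustration only) to be an absorbing state.

    transitions = {
        "government facility":  {"vehicle": 1.0},
        "vehicle":              {"informal hide site": 0.5, "formal soft facility": 0.3,
                                 "formal hard facility": 0.2},
        "informal hide site":   {"vehicle": 1.0},               # moves again on warning
        "formal soft facility": {"vehicle": 1.0},               # moves again on warning
        "formal hard facility": {"formal hard facility": 1.0},  # assumed to stay put
    }

    def simulate(warnings: int) -> str:
        state = "government facility"
        for _ in range(warnings):
            choices, probs = zip(*transitions[state].items())
            state = random.choices(choices, weights=probs)[0]
        return state

    random.seed(1)
    trials = 10_000
    counts = {}
    for _ in range(trials):
        end_state = simulate(warnings=4)
        counts[end_state] = counts.get(end_state, 0) + 1

    for state, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{state:22s} {n / trials:.2%}")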
Note also that this is the last chance you will have to ensure that you are assessing the right
things. With a good set of operational depictions, you can go back to your chain of command
and your working group (or even the JROC) and ask, "is this what you wanted us to look at?"
Figure 12. Transition diagram for leadership targets from the Global Strike Raid Scenario CBA (on warning, leadership moves from government facilities to a vehicle and then to informal hide sites or formal soft or hardened facilities, moving again on subsequent warnings).
Figure 13. Hierarchy of combat models by scope: engineering-level, engagement-level, raid-level, and campaign-level models, with the forces represented and time span increasing in one direction and system, environmental, and tactical detail increasing in the other.
Figure 14 [Washburn, 1998, p. 3] is a less common depiction; it categorizes analysis approaches
by technique as opposed to scope. To explain this figure, we will go from right to left. We are
all familiar with exercises, as they employ actual forces in real environments. There is no
ground truth in an exercise, as the actual exercise outcomes are subject to human perception
and judgments.
Figure 14. Taxonomy of analysis techniques from abstraction to realism: normative optimization and game-theoretic models; evaluative mathematical models, Monte Carlo simulations, man-in-the-loop simulations, and wargames (multi-sided, humans involved, statistical issues); and exercises (multi-sided, no ground truth) [Washburn, 1998, p. 3].
Furthermore, exercises have opposing forces, so they are multi-sided and have humans
involved. Exercises also have statistical issues, in that the outcomes might be more a function
of the ability of the particular players involved than the capabilities possessed by the sides.
Finally, exercises are evaluative, in that they are not designed to produce solutions; instead,
they provide a way to demonstrate and measure proposed solutions.
Going to the left, wargames remove the ground truth issue, and specify the physical
environment and the mechanisms that produce combat outcomes. Man-in-the-loop simulations
are man versus machine methodologies, and do not have multiple sides involving humans
making decisions (if they did, this taxonomy would call them wargames). Monte Carlo
simulations use analytical schemes to do repeated trials of random effects such as weather and
bomb hits, and do not use humans in the loop. Finally, there are mathematical models that do
not simulate random effects, but instead evaluate operations, systems, and human behavior
analytically.
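For readers who have not built one, a Monte Carlo model can be very small. The sketch below estimates the outcome of a simple engagement from repeated random trials; the reliability, hit probability, and force sizes are hypothetical:

    import random

    # Minimal sketch of a Monte Carlo engagement model: each trial draws random
    # outcomes for weapon reliability and accuracy, and the trials together estimate
    # the distribution of targets killed. All input numbers are hypothetical.

    RELIABILITY = 0.90   # probability a weapon functions
    P_HIT       = 0.70   # probability a functioning weapon kills its target
    SHOTS       = 8      # weapons allocated to the target set
    TARGETS     = 6
    TRIALS      = 100_000

    random.seed(0)
    kill_counts = []
    for _ in range(TRIALS):
        kills = 0
        for shot in range(SHOTS):
            if kills >= TARGETS:
                break
            if random.random() < RELIABILITY and random.random() < P_HIT:
                kills += 1
        kill_counts.append(kills)

    mean_kills = sum(kill_counts) / TRIALS
    p_all_killed = sum(1 for k in kill_counts if k >= TARGETS) / TRIALS
    print(f"Mean targets killed: {mean_kills:.2f}")
    print(f"Probability all {TARGETS} targets killed: {p_all_killed:.3f}")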
There is another class of models, which are shown on the left-hand side. These normative
models are not based on evaluating inputs such as CONOPS, but instead use sets of rules to
suggest solutions. Optimization models search a large set of alternatives and recommend
solutions based on maximizing or minimizing a set of quantitative objectives. Game-theoretic
models operate similarly, but add back in the multi-sided nature of military conflict.
"Fine," you say. "I'll take the game-theoretic model that represents the entire war at the
engineering level, so I'll get good estimates of all the outcomes and use techniques that
compute the best CONOPS."
Well, there's no such model, and there's no free lunch. The techniques on the left side of Figure
14 have significant limits in what they can represent directly, and you will likely have to give
up the ability to represent certain tasks or functions to get your operational depiction into a
form appropriate for those techniques.
Consequently, the question of choosing an analytical approach involves both warfighting scope
and technique, and there are substantial tradeoffs involved. It is unlikely that your CBA will be
so narrow that you will be analyzing purely in the engineering or engagement realm, so you
will probably be assessing at the raid or campaign level. If you tend towards realism, you must
involve humans, and that means your evaluations must be conducted in real time. If you need to
examine a large number of alternatives, then you will need abstraction (and analytical
expertise).
Suppose you have been given the Joint Forcible Entry Operations CBA. What range of
techniques would you employ to assess our current capabilities? An actual exercise is probably
out of the question, but wargames look attractive. But, can you set up wargames for all the
relevant forcible entry scenarios you identified in the FAA? Also, will one set of players for
each wargame be enough, or will you need to repeat the wargames with multiple teams?
Also, is it sufficient to concentrate on the forcible entry itself (which would be a raid-level
analysis)? Don't you need to consider the impacts of the forcible entry operation on the larger
campaign? If the forcible entry operation cannot be conducted as planned, is there a different
CONOPS that would allow the campaign to achieve its objectives, and is there some normative
model out there that could help produce that CONOPS?
Or, is the whole CBA really aimed at judging the contributions of a few proposed systems? If
that's the case, can you reduce the problem to a set of engagement level analyses, and avoid
dealing with all the forces and systems?
These are all difficult questions, which the actual Joint Forcible Entry Operations CBA team
confronted with varying degrees of success. The sheer difficulty of answering these questions is
all the more reason to do a quick look (Section 5), so you have some idea, prior to the FNA, of
where you should spend your scarce analytical resources. Invariably, you will have to strike a
balance between scope, techniques, and level of detail.
The following is a set of questions you should ask when evaluating analytical approaches.
Can the approach evaluate the doctrinal approaches collected in the FAA?
Can the approach estimate the measures of effectiveness tied to the FAA attributes?
Can the approach represent the scenarios, tasks, and functions identified in the FAA?
How much analytical overhead (i.e., estimation of outcomes not relevant to the CBA)
must be absorbed in the approach?
Does the approach require construction of a set of special-purpose models? If so, how
difficult will it be to win community acceptance of these models?
Is the approach agile enough? Can it quickly assess a large number of alternatives (US
and enemy CONOPS, scenarios, and capabilities)?
Do not let the availability of a particular tool or methodology (or the statement that a model is
validated) drive the analytic approach. The approach must fit the problem, not vice versa.
An aside: group methods. Several CBAs have employed expert judgment techniques, typified
by a variety of group voting and weighting methods. In the taxonomy of Figure 14, these fall
into the category of mathematical methods, because they are evaluative.
Despite their widespread use, we advise against relying on such techniques as your primary
method of estimating outcomes, causes, and needs. In the early days of JCIDS, many analysts
attempted to construct matrices to map systems to capabilities (or functions), and used groups
to grade the contribution of each system to the capability or the function. These grades, which
were normally presented using the typical red-amber-green scale, were supposed to yield
some sense of our adequacy in a mission area.
But, these methods are not very satisfying for estimating the outcomes of interest. Consider the
rendition task flow shown in Figure 10. How does such a method produce estimates of friendly
casualties or collateral damage? How does it assess the likelihood that the camp receives
strategic warning from force movements and scatters before the attack even occurs? And how
can it estimate the likelihood of capturing the targets?
To do this, you need to employ methods that represent the important physics of the situation:
how fast both sides can move, how big their signatures are, what their detection capabilities are,
and how well they can fight in a direct-fire engagement in the terrains of interest.
Note that such an approach does not preclude the use of expert judgment. You can gather a set
of experts and have them estimate the probabilities of interest for the tasks shown in Figure 10,
which would not only be fairly quick but would give you the range of expert opinion on the
feasibility of executing that CONOPS. Furthermore, such an approach provides advice on
where you need to spend your analytical resources, because it helps identify tasks that either
appear critical or have widely differing views on their likelihoods of success.
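A minimal sketch of that structured use of expert judgment follows. The task list loosely mirrors Figure 10, the elicited probabilities are hypothetical, and the end-to-end estimate assumes task outcomes are independent:

    # Minimal sketch: combine expert-elicited task success probabilities into
    # end-to-end mission success estimates. Tasks loosely mirror Figure 10;
    # all probabilities are hypothetical.
    from math import prod

    experts = {
        "Expert A": {"avoid strategic warning": 0.7, "insert force": 0.9,
                     "secure objective": 0.8, "exfiltrate": 0.95},
        "Expert B": {"avoid strategic warning": 0.5, "insert force": 0.9,
                     "secure objective": 0.9, "exfiltrate": 0.9},
        "Expert C": {"avoid strategic warning": 0.8, "insert force": 0.85,
                     "secure objective": 0.7, "exfiltrate": 0.95},
    }

    # End-to-end estimate per expert (assumes task outcomes are independent).
    for name, p in experts.items():
        print(f"{name}: P(mission success) = {prod(p.values()):.2f}")

    # Where do the experts disagree most? Large spreads flag tasks that deserve
    # deeper, physics-based analysis.
    tasks = next(iter(experts.values())).keys()
    for task in tasks:
        vals = [experts[e][task] for e in experts]
        print(f"{task:24s} min {min(vals):.2f}  max {max(vals):.2f}  spread {max(vals) - min(vals):.2f}")

The spread across experts on a given task is often more informative than the average, because it points to where deeper analysis is warranted.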
The popularity of group methods is closely tied to the ideas of scenario agnosticism and
capabilities de nusquam. Since the early theories of capabilities-based analysis argued that we
could not represent actual environments and enemies, there was no way to represent the physics
of a situation. Consequently, well-intentioned individuals trying to do JCIDS assessments
found themselves in large conferences answering questions like, "on a scale of 1 to 100, what is
the capability of this torpedo in achieving undersea superiority?" Such scores were dutifully
compiled, averaged, and presented to three decimal places. While these numbers captured
prevailing opinion, they certainly did not amount to a serious assessment, and more often than
not just resulted in junk science.
We freely admit that there are many important considerations that do not have physics
associated with them. For example, you may be comparing two doctrinal approaches to an
operation, one of which requires getting basing rights from a single mildly uncooperative
country, while the other requires getting basing rights from four friendly countries. Which
requires us to spend more diplomatic capital? Does the expenditure of this capital even matter?
In these cases, you must resort to techniques based in expert judgment to make an estimate.
Nonetheless, we note that key performance parameters for major weapons systems are
measurable quantities. Consequently, you should assess in those terms if possible. If you are
interested in learning more about quantitative military modeling, several texts are available
[e.g., Loerch and Rainey, 2007].
the estimated results of executing those CONOPS, in terms of the measures developed in
the FAA;
Here is a very straightforward example. Consider what an FNA would look like if the scenario
were Operation EAGLE CLAW, the Iranian hostage rescue attempt conducted in 1980. The
mission would be to rescue a set of hostages held in Tehran, with the following measures and
constraints as dictated by the White House:
minimize the size of the planning group and the assault force; and
The CONOPS would be to use RH-53D minesweeping helicopters operating from a carrier and
MC-130 aircraft to move the SOF assault force, which would be supported by AC-130
gunships during the actual assault. An analytic approach would likely try to estimate the
following:
the likelihood of the enemy receiving strategic warning, either by exposure of the
planning process, detection of rehearsals or force movement, or by signals intelligence;
the likelihood of the assault force reaching the initial staging location at Desert One (the
probability of at least 6 of the 8 helicopters reaching Desert One, given the reliability of
the RH-53D at that time, was later estimated to be about 0.65);
the likelihood of the assault force reaching the U.S. embassy in Tehran;
the estimated outcome of the assault, in terms of losses and collateral damage; and
If your FNA analysis matched that of the Holloway Commission (which investigated Operation
EAGLE CLAW), you would conclude that the mission was high risk [Holloway, 1980, p. v].
Furthermore, your FNA would conclude that, for this scenario and CONOPS, the lack of
reliable, long-range lift from maritime platforms would be one of the overarching functional
needs. You might also argue that the CONOPS was likely to have difficulties because it
involved force elements from all Services and did not include joint training or a full-scale
rehearsal. Finally, you might point out that an alternative need would be to secure a land base to
avoid the complexities of employing helicopters from a carrier.
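The helicopter figure cited above is the kind of number a simple probability model can check. The sketch below computes the chance that at least 6 of 8 aircraft arrive; the per-aircraft reliability is an assumed value chosen only to illustrate the arithmetic (it reproduces a result near the 0.65 estimate), not the actual planning factor:

    from math import comb

    # Minimal sketch: probability that at least 6 of 8 helicopters remain mission
    # capable, as a binomial calculation. The per-aircraft reliability below is an
    # assumed illustrative value, not the actual planning factor.

    p = 0.74         # assumed probability a single RH-53D completes the leg
    n, need = 8, 6

    p_at_least_need = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))
    print(f"P(at least {need} of {n} arrive) = {p_at_least_need:.2f}")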
The point of the foregoing discussion is that the output of an FNA need not be couched in
strange, abstract language (or even linked to UJTLs, for that matter). The FNA results are
simply an assessment of how well we can do something, and an accounting of the reasons why
we cannot achieve mission success at an acceptable level of risk. If at all possible, you should
state the needs in quantitative terms.
This leaves the question of prioritizing needs (which are also called gaps in many JCIDS
documents). Within a particular scenario, such as the EAGLE CLAW example above,
prioritizing needs is probably straightforward. If the assault force cannot get to Tehran with
high probability, then the assault can't even occur. So securing highly reliable lift is a necessary
(but not sufficient) requirement for mission success.
Prioritization becomes difficult when you are assessing multiple scenarios across the breadth of
the defense strategy, and you end up with a disparate list of needs. This list will usually contain
a few things that are common to all scenarios, so their pervasiveness probably makes them a
high priority. On the other hand, some needs that are backbreakers in one situation (such as the
need to defeat enemy air defenses) may be irrelevant in others (irregular situations where the
enemy has no air defenses). So how do you provide this prioritization?
One reason we have stressed examining the strategic guidance in this document is that the SPG
and CPG in particular contain a great deal of advice on priorities. The best prioritization
scheme is one where you can directly lift the priority information out of these documents and
apply it to your needs, so that you have a clear source for how you have ordered your needs.
At this stage of the assessment, however, prioritization is not that critical. Since you have not
yet investigated solutions and their costs, having a priority of need is less important than
identifying the set of crucial needs that are dragging down the likelihoods of operational
success. Suppose, for example, that one of your needs ends up as the bottom priority. But, you
subsequently discover in the FSA that the need can be filled by a policy change, using existing
capabilities, at little or no cost. Clearly you would recommend that action, because the costs are
inconsequential.
It is critical, however, to provide the linkage from your needs to your estimated operational
outcomes for each scenario, in terms of your MOEs. This allows senior decision makers to
consider both the likelihood of the scenario occurring and the consequences of failure, which
are the major components of risk. It also allows them to perform their own calculus in terms of
tradeoffs among MOEs (e.g., the inevitable tradeoff between confidence of killing a target and
expected collateral damage).
What if there are no problems? There is one other outcome that may occur in an FNA: you
may discover that there are no needs. This could happen due to changes in strategic priorities
(such as the collapse of a particular enemy), a new application of existing CONOPS, or the
simple exposure of operational combinations that had not been considered in a unified
assessment. To give a recent historical example, the reason the DOD cancelled the Crusader
program was that the leadership felt that the need the Crusader was aimed at was addressed
adequately by other combinations of existing systems.
Concluding that there are no needs will be very controversial. Some important group pushed to
have a CBA done, and they did so with the firm belief that there was some operational problem
that needed to be fixed. They will not react well if you tell them they were wrong, so you will
have to do considerable consultation with your chain of command on how to bring your
assessment forward. Note that if you execute your analysis on the case-by-case basis we
recommend in Section 7.4, the story will grow over time, and your opponents will not be able
to accuse you of sucker-punching them with a completely unexpected outcome. They will still
be upset, but you can honestly respond that they should have seen it coming.
[Figure: overall FNA process flow, covering analysis preparation (collect and inspect performance data; select and finalize the analytical approach), scenario analysis for each scenario (starting with the best-understood scenario), analysis reconciliation (identify unacceptable outcomes, document causes from the analysis, present to the working group, and reconcile working group comments), and needs development (derive needs in terms of the operational depiction).]
minimize the size of the planning group and the assault force; and
The third and fifth of these conditions resulted from the direct action CONOPS, but you can
envision other approaches that obey the other four edicts.
The response to any scenario has alternatives rooted in policy changes. We have already
advised you to include policy expertise on your core team, and you should employ that
expertise in the FSA.
A policy change will almost always imply a CONOPS that is different than the baseline
CONOPS written into an Analytical Agenda scenario. This will create a substantial
bureaucratic challenge for you, because the baseline CONOPS was developed by a large
number of people and went through lengthy coordination. As a result, deviating from this
baseline will certainly spark protests. JCIDS, however, demands that you analyze alternative
CONOPS, so you can use that edict in the instruction to solicit proposals from your working
group.
Also, you do not need to produce a consensus CONOPS as you would if you were constructing
an Analytical Agenda scenario baseline. Since you are looking for solutions, you can collect
multiple CONOPS and evaluate them, with an eye towards identifying under what conditions
one CONOPS would work better than another.
they are feasible, both from the standpoint of policy and technology; and
they are strategically responsive; that is, they deliver solutions when they are needed.
JCIDS is not a mature process, and one of the most immature parts of JCIDS is how it should
treat affordability. Historically, the requirements processes operated by the JROC have
restricted their advice to operational needs and solutions, and have left the question of whether
a particular solution is worth the investment to the larger programming and acquisition
processes. But, you can no longer avoid the issue in a CBA.
For existing programs, gathering cost estimates is a matter of contacting the program office or
OSD/PA&E's Cost Analysis Improvement Group (CAIG). The program office estimate and the
CAIG's estimate will generally differ, so you should inspect both.
Also get manpower estimates. Many current programs have justified increased materiel expense
via reductions in manning requirements. You will have to determine whether or not these
manpower reductions are captured in your cost data (as they would be in life-cycle costs), or
whether they are not accounted for in developmental and per-unit acquisition costs.
Demonstration programs or proposed programs are much more difficult. In these cases, you
may have to commission a separate cost analysis to estimate what it would take to get the
capability. If that gets to be too difficult or contentious, an alternative approach would be to get
the high and low estimates and evaluate the worth of the capability for both cases.
JCIDS does not insist that you produce a very detailed independent cost estimate such as those
required by the acquisition community. What you should be able to do, though, is characterize
the 20-year life-cycle costs (or savings) of the things you are proposing, in terms of:
developmental costs;
infrastructure and basing costs;
procurement costs; and
operating, maintenance, and personnel costs.
These are the four critical cost components in any solution. We have to pay to develop a
capability, provide a place to house it in peacetime, procure it, and operate, maintain, and staff
it. If you can portray your CBA solutions in those terms (with rough, but reasonable estimates)
and contrast them with the program of record, you will have provided a complete economic
picture to the leadership. It is one thing to prioritize needs and potential solutions, but quite
another to choose what solutions you really procure. Everyone has a vision of the car they
always wanted; few can actually afford it.
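If a rough rollup helps, the sketch below assembles the four cost components into a 20-year life-cycle comparison against a program of record; every figure is hypothetical:

    # Minimal sketch: rough 20-year life-cycle cost rollup for competing options,
    # broken into the four components discussed above. All figures ($M) are hypothetical.

    YEARS = 20

    options = {
        "Program of record": {"development": 800, "infrastructure": 200,
                              "procurement": 2500, "annual_o_and_m": 150},
        "Alternative A":     {"development": 300, "infrastructure": 350,
                              "procurement": 1800, "annual_o_and_m": 180},
    }

    for name, c in options.items():
        total = (c["development"] + c["infrastructure"] + c["procurement"]
                 + YEARS * c["annual_o_and_m"])
        print(f"{name:18s} 20-year life-cycle cost: ${total:,.0f}M "
              f"(O&M share {YEARS * c['annual_o_and_m'] / total:.0%})")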
Next, you must develop rough estimates of the technical feasibility of your proposed solutions.
You cannot work at the engineering level, because you will be considering a broad range of
possibilities. You can, however, use a framework like the following to classify the technical
risk of your alternatives.
No risk. This is a new use of existing systems, such as employing strategic bombers for
close air support.
Very low risk. This is a new use of systems that are due to be fielded in the near term.
Low risk. This is a new combination of existing or programmed subsystems that will
require new integration, such as equipping the existing Predator UAV with the existing
Hellfire missile.
Medium risk. These options require development of new equipment and systems, but
they do not require technological or industrial advances.
We note that JCIDS has begun using the more formal framework described in Defense
Acquisition Guidebook [Section 10.5.2, 2006], which describes nine Technology Readiness
Levels (TRLs). This is a framework similar to the one above, but contains more precise
definitions of maturity of the technology. Clearly, you will need legitimate technology experts
to make these kinds of assessments.
This leaves the question of strategic responsiveness. If you have done your job correctly in the
FAA, you have characterized a range of security challenges that are supported by the area you
are assessing. Are these problems that need solutions now? If not, when do we think that they
will really become a problem, and what's the likelihood that they never become a problem?
JCIDS is oriented at looking 7-14 years into the future, but many of the issues being considered
in CBAs exist in the current day, and embarking on a leisurely 15-year development program
may not be the best option.
Since you are dealing with uncertain futures, the question of strategic responsiveness contains
considerable uncertainty. As a result, you may choose to frame options in terms of when they
can be realized. As an example, the DOD realized in the early 1980s that it had a significant
airlift shortfall. Rather than wait for the full execution of the C-17 program, the DOD opted to
restart the C-5 production line and also procure a number of KC-10 tankers that had significant
cargo-carrying capability. This solution had three components: a short-term fix with the KC-10;
a mid-term fix with the C-5B; and a long-term fix with the C-17.
If a future threat has considerable uncertainty, then a longer-term hedge approach may be the
better choice. People often decry the two-decade gestation period of the F-22 fighter, which
was originally developed to prosecute the air war against much larger Soviet forces. When the
Soviet Union dissolved, the F-22 program was delayed but was still retained as a hedge in case
a similar threat was resurrected. The Soviets did not reappear, and the F-22 is now being
procured in much smaller numbers than originally planned. Now, we will not debate whether
this is by design or by accident, but the point is that recommending that the entire force be
remade at great expense to address a challenge that may appear in 20 years is not particularly
prudent when a hedged approach is available.
If you are proposing materiel solutions, you probably need to consult with the acquisition
community on when those solutions could be fielded, assuming prompt budgetary action. Many
things have to happen in any acquisition, and all of them take time that you need to estimate.
Too many DOD analyses present recommendations that are simply Christmas lists. They throw
every solution imaginable at a problem, propose gold-plated systems, ask for ill-defined, magic
technologies, recommend concepts that only apply in a narrow (or even fanciful) set of
operational conditions, or outline solutions that are only realizable after three decades of
development. We assume that, if for no other reason than professional pride, you do not want
your CBA to become one of these analyses. The only way to ensure that is to seriously assess
affordability, feasibility, and strategic responsiveness.
Now, the situation may be such that your CBA does not have to consider solutions that
decrease costs. After all, the leadership wanted your mission area examined, and they likely
wanted it examined because they jointly committed to the need to improve it. But the first two
portfolio options should be an output of your FSA regardless. You have to give good advice on
the upper bound of realizable solutions, which is the cost-unconstrained case. You should also
give good advice on the best cost-neutral solution, as this, coupled with the cost-unconstrained
case, gives the leadership an estimate of the range of payoffs possible with additional
investments. This is not the oft-criticized budget-driven approach to analysis, where the
objective is to pay some bill by cutting capability in a mission area. Instead the idea is to
characterize the spectrum of investment options and operational payoffs.
In addition, creating and analyzing the cost-decreasing case has the benefit of characterizing
where the bulk of the costs lie in the legacy force, and whether those costs are commensurate
with their contributions to your mission area. As an example, the DOD established continual
fighter orbits over most major US cities after the attacks of September 11, 2001 to allow for
rapid intercepts of any additional hijacked airliners. Clearly, a modern fighter such as an F-15C
is grossly overdesigned for shooting down an airliner, and a CBA on this operational need
would likely recommend a completely different portfolio of approaches if time were available
to change procedures (such as passenger screening), modify existing systems (such as putting
armored doors on crew compartments of airliners), or even procure inexpensive air-to-air or
surface-to-air intercept capabilities. The point is that trying to employ legacy forces on the
cheap to accomplish certain operations may reveal a substantial mismatch among force
capability, operating cost, and need that you wouldn't have detected otherwise.
Another useful organizing framework addresses the uncertainty of having critical capabilities
that are outside the scope of your CBA. For example, the DOD has committed to fielding the
Global Information Grid (GIG) as a way to share information. Unfortunately, we don't know
when (and perhaps if) the GIG will be realized, and your options are likely very different
depending on the GIG's availability. This could lead to three portfolios:
GIG assumed available;
GIG assumed available, but solutions hedged against GIG not being available; and
GIG assumed not available.
Other frameworks could revolve around strategic risk guidance across future security
challenges (accept risk in one area to improve performance in another), choice of employment
domain (ground, sea, air, space, or cyberspace), or even force basing posture (use CONUS-based
or overseas-based forces). Our point is that choosing a few of these frameworks makes it much
easier to assemble sets of options that are linked to overarching themes.
This leaves the question of how to assemble a portfolio option given a particular framework.
This has to be a part of your FSA analysis plan, because you will quickly discover why Wall
Street investment managers are paid so much money to assemble mutual fund portfolios. It is
not an easy job, particularly if you are trying to find the best mix of options across multiple
MOEs and affordability, risk, and responsiveness criteria.
As a result, you should seek a methodology that looks at lots of options. Too often, large DOD
studies devolve to a slide that recommends three possible courses of action, one of which is
obviously preferred, one of which is an obvious throwaway, and the last is included to satisfy
the preferences of some particular senior leader or influential group. While senior leaders and
influential groups can only be ignored at your own peril, their views should not artificially limit
your ability to consider a large number of combinations. Analysts in the optimization
community routinely solve problems with tens of thousands of variables and thousands of
constraints, so computational capability is not the issue.
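To show the flavor of such an approach without specialized software, the sketch below simply enumerates every combination of a handful of hypothetical alternatives and keeps the best-value portfolio that fits a budget; a real portfolio study would use a proper optimization solver, many more alternatives, and multiple measures rather than a single additive value:

    from itertools import combinations

    # Minimal sketch: brute-force portfolio selection. Each alternative has a cost
    # and a mission-value score (from the FAA measures); we keep the highest-value
    # combination that fits the budget. Names, costs, and values are hypothetical,
    # and value is assumed to be additive across alternatives.

    alternatives = {
        "New sensor":           {"cost": 400, "value": 30},
        "CONOPS change":        {"cost": 10,  "value": 15},
        "Upgrade legacy fleet": {"cost": 600, "value": 45},
        "New weapon":           {"cost": 900, "value": 60},
        "Additional training":  {"cost": 50,  "value": 10},
    }
    BUDGET = 1200

    best_set, best_value = (), 0
    names = list(alternatives)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(alternatives[n]["cost"] for n in combo)
            value = sum(alternatives[n]["value"] for n in combo)
            if cost <= BUDGET and value > best_value:
                best_set, best_value = combo, value

    print("Best portfolio:", list(best_set), "value", best_value,
          "cost", sum(alternatives[n]["cost"] for n in best_set))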
Using such approaches, however, puts you firmly into the realm of abstract tools as shown in
Figure 14. So, you will have to invest some of your time into understanding how the contents of
your various portfolios were generated if you choose one of these approaches.
Here's some advice on inspecting the contents of solution portfolios.
investments, you may be in danger of producing a set of solutions that are optimized only
for particular situations, and are not useful otherwise. Conversely, if your portfolio
contains nothing but general-purpose solutions, you may be in danger of producing a
team full of decathletes: competent, but likely to be beaten by a team that contains some
number of specialists.
How much is the portfolio at odds with current investment trends? If your portfolio
calls for, say, a doubling or tripling of funding in a mission area that has not yet resulted
in an actual operational disaster, you will have a difficult time making your case. Even
when risks are understood (such as what would happen if the levees protecting the city of
New Orleans failed in a hurricane, as they did in 2005), it is very difficult to overcome a
long history of no disaster.
One type of CBA that may present a challenge for the portfolio approach is one that proposes
an operational concept, such as seabasing. But, even these types of CBAs contain different
options. For example, a possible theme for alternative seabasing portfolios could be organized
around the question of what type of force to seabase (SOF, ISR, fixed wing aviation, or a full
Marine Expeditionary Force). This framework would result in multiple options, and would
work well in bounding the available seabasing alternatives.
The final challenge with assembling a portfolio is that you are selecting a set of options that
presumably optimize something. "That's easy," you say; "I'm trying to optimize the likelihood of
mission success." But, recall that way back in the FAA you developed a set of measures to judge
the value of a particular CONOPS. Those measures are what you should be using to evaluate
the mission effectiveness of your portfolios.
Some of your measures will be at odds with each other. For example, the option that minimizes
expected collateral damage may have a low lethality. The existence of such conflicting aims is
why analysts do so-called trade studies; they use these studies to find out how various
operational goals trade off against each other.
Your natural response to this may be to interrogate the relevant decision makers on their
priorities. This almost never works, because:
you can't get enough time with the right decision makers to unambiguously determine
their (multidimensional) priorities;
the decision makers will disagree on what the priorities should be;
the decision makers don't have well-formed priorities (because if they did, they wouldn't
have asked you to do the CBA!); and
there is no guarantee that they won't reject your recommendations, even if you used their
priorities.
A better approach is to examine your measures and try to discover which sets of priorities cause
the recommended portfolio to change. Returning to the Iranian hostage rescue example, prior to
Desert One the emphasis was on planning secrecy and minimal force size. In the subsequent
planning for another attempt after Desert One, these imperatives were much less important, and
a CBA aimed at such a scenario would likely recommend a much different approach.
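A minimal sketch of that kind of sensitivity check appears below; the portfolio scores against each measure are hypothetical. Sweeping the weight placed on one measure and noting where the preferred portfolio switches tells you which priority judgments actually matter:

    # Minimal sketch: find the priority weightings at which the recommended
    # portfolio changes. Each portfolio is scored 0-100 against two conflicting
    # measures; the scores are hypothetical.

    portfolios = {
        "Minimal collateral damage": {"lethality": 55, "low collateral damage": 95},
        "Maximum lethality":         {"lethality": 90, "low collateral damage": 60},
        "Balanced":                  {"lethality": 75, "low collateral damage": 80},
    }

    previous = None
    for step in range(0, 11):
        w_lethal = step / 10  # weight on lethality; the remainder goes to collateral damage
        best = max(
            portfolios,
            key=lambda p: w_lethal * portfolios[p]["lethality"]
                          + (1 - w_lethal) * portfolios[p]["low collateral damage"],
        )
        if best != previous:
            print(f"lethality weight {w_lethal:.1f}: preferred portfolio changes to '{best}'")
            previous = best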
Note that your measures in and of themselves can provide a framework for portfolio options.
You could have, say, a minimal collateral damage portfolio, a maximum lethality portfolio, and
a minimum force size portfolio. This approach also directly addresses the issue of conflicting
operational goals, because you (and your target audiences) can see how the solution choices
change in the portfolios as the measures change.
The construction of a portfolio requires weighing the major components of the possible
solutions: their mission effectiveness, their affordability, their technical risk, and their strategic
responsiveness. The frameworks you choose will dictate how you use these components in
constructing portfolios, and you should be able to formulate several interesting portfolios that
contain a mix of approaches, such as the airlift example we discussed above.
8.6. Excesses
At various places in the formal JCIDS documentation, you will find references that require you
to identify things called redundancies and overlaps. We advise that the appropriate place to
propose capabilities that are excess to needs is in the FSA. It is not until you know the needs,
the spectrum of realizable options, and the resource implications of those options that you can
judge something as being unnecessary.
Admittedly, this is very dangerous territory. Previous Joint Staff requirements processes only
commented on needs, and did not offer up offsets (a polite term for budget sacrifices). JCIDS,
however, mandates that the issue of excesses be examined, so you will have to consider them.
The advice above on portfolio frameworks also gives you a tenable way to bring forward
candidates for excess. If you build a cost-neutral or a cost-saving portfolio, something in the
programmed force will likely become an offset to make room for a more efficient capability. So,
that framework will automatically identify excess candidates.
Using risk guidance from the strategic documents is another way to present a framework for
excesses. For example, DOD funding for irregular warfare capabilities has increased
dramatically since 2001, while several traditional warfighting systems have been cancelled or
curtailed. The implication is that strategic priorities have changed what we view as excesses in
general-purpose forces.
Your challenge is that you have to stay within the scope of your CBA. The current state of
JCIDS is such that you are not allowed to declare excesses in other mission areas beyond the
scope you defined in the FAA; you are not allowed to propose gutting the Defense Commissary
Agency to pay for new capabilities that your CBA needs. Nonetheless, you will be looking at a
broad range of force elements and systems, and if you defined your scope correctly, you will
have a large trade space. Some of these forces and systems are only applicable to your mission
area, so you can comment authoritatively on whether they are redundant.
The more difficult problem is judging general-purpose capabilities. Some time ago, the DOD
opted to remove nuclear weapons capability from the B-1B bomber, as it was deemed to be
excess to that particular mission. This decision did not, however, make the B-1B excess in
general, as it has considerable conventional capability. To declare it as excess in general would
require examining its conventional capabilities.
This unfortunately leaves the defender of any capability a trump card. As long as program
defenders can demonstrate their program has utility outside of your CBA's mission area, they
can claim you can't brand it as excess without further study. Your best option in this case is to
identify it as excess to your mission area, and let the JROC decide whether to commission the
additional study.
Since it is easy to construct a situation where the thing you believe is excess is exactly the
thing we had to have, we have stressed assessing a broad range of operational situations. If you
have an adequate scenario sample, and the resource in question doesn't compete favorably in
any of them, then you will have a strong case. Similarly, if the resource only provides minimal
improvements in cases where the strategic guidance says we can take risk, it may also be
excess, particularly if it is expensive.
It is unlikely that you will get concurrence on excesses from your working group. Everything in
the DOD is supported by someone, and they will execute their counterfire plan as soon as they
hear you may threaten their program. As a result, you will have to do excess determination
within your core team, bring your recommendations forward to your leadership, and do
considerable planning on how to introduce your results so they do not suffer crib death.
In these cases, disagreement is unavoidable, so don't waste time trying to avoid it. Instead,
ensure you have solid, defendable arguments for excesses, and ensure that your arguments
make it to the senior leaders.
[Figure: overall FSA process flow, starting from the FAA measures and FNA needs and covering alternative generation (collect policy and materiel alternatives to needs; identify potential game-changing alternatives), alternative feasibility (estimate affordability and strategic responsiveness; investigate CONOPS for game-changing capabilities; recommend experimentation and research), portfolio planning and generation (choose portfolio frameworks; formulate and finalize the portfolio construction approach; generate portfolios for each framework; analyze new alternatives in the scenarios), excess analysis (identify excess candidates internally; formulate a bureaucratic plan for excesses), and staffing of the portfolios and excesses.]
To break a bureaucratic logjam. Large bureaucracies like the DOD tend to stall new
actions, so frustrated senior officials occasionally sweep aside normal procedures and
commission a special effort to get something assessed when the organizations that normally
do the work cannot do so. In such cases, you should find out why those organizations could
not deliver, and keep those reasons in mind as you execute your assessment. You will
probably also have to rely on information those organizations have developed.
To address an emerging need. While the DOD has a separate Joint Urgent Operational
Need process for current warfighting issues, senior officials may decide that immediate
examination is required to move the DOD towards finding enduring solutions. The CBA
described in Appendix A is such a case. Such assessments normally do not require much
FAA or FNA work, because the shortcoming has already been demonstrated. However, it
may be very challenging to come up with enduring solutions on a short timeline.
To settle a disagreement. The DOD contains many large organizations which periodically
find themselves at odds with each other. In this case, the Quick Turn CBA is a form of
arbitration. Of course, the challenge here is that you are in the middle, and you don't want
to be crushed between collisions of large bodies.
To pull together a set of disparate examinations. The division of labor in the DOD
sometimes makes it impossible to conduct an integrated examination of an issue. Consider,
for example, the issue of distinguishing between friends and foes in combat. While this is
everyone's problem, it does not belong to any particular Service, and has only recently
been examined in any sort of integrated fashion. The challenge in this type of CBA is
finding all the pieces and then finding a way to assemble them.
Schedule. List the major phase points only; there will not be more than two or three of
them in a 30- to 60-day effort.
RESOURCES (i.e., Service Support).
Organizations supporting the working group. List the people you'll use and their
providing organizations, if known.
External resources. Estimate the funding necessary, or describe what tasks you are
diverting resources from to support the CBA.
Classification. State the classification level of the CBA, and whether obtaining higher-level accesses (or people who already have those accesses) is a limiting factor to meeting
the deadline.
OVERSIGHT (i.e., Command and Signal).
Governance. List the groups (hopefully not more than two) that will oversee your
assessment.
Communications. List the final products to be delivered (briefing, report).
You should write this immediately based on what you know, and you should be able to fit the
initial versions into two or three pages. If you have some part of your team assembled, work it
over with them. Then, walk it back up the tasking chain as far as time and bureaucratic
constraints will allow.
Typically, you will have an initial meeting where you find out you're running a Quick Turn
CBA, with an invitation to come to the next meeting to finalize the tasking. If you're really
agile, you'll show up at the follow-on session with a document like the one outlined above.
Working over the words will prevent a great deal of misunderstanding, which you cannot afford
in an accelerated effort.
Also, you can maintain this document as a management tool for your working group.
a logjam, there are likely competing views on the availability and costs of the alternatives. You
may be able to do the assessment using these views as bounds (e.g., what should we do if the
thing actually costs X), but if these parameters are truly unknown, you will have to devote some
time to estimating them.
The issue is similar for adversary and policy expertise. If your CBA is aimed at a particular
scenario and a specified enemy with well-understood policy and force employment boundaries,
you may not need dedicated experts. We warn you, however, that the choice of opponent and
operational situation drives the conclusions, so be careful about dismissing these needs too
quickly.
The question of streamlining your organization really boils down to this: where is the
uncertainty? What is it about this decision that we need to investigate? The answer to this
question really defines what you need on your team and how much you can accomplish in a
short timeline.
Furthermore, the type of analytic work that can be done is dictated by the area(s) of uncertainty
and the timeline. If you attempt to do analytics that involve operational evaluation or portfolio
construction, then you will need a lead analyst who is very creative. In particular, you will not
have time to execute the normal way of estimating operational outcomes (unless you are in the
rare position of being able to find and exploit work that has already been done). Mass won't
help, either; adding more analysts will slow you down. Instead, insist on getting someone who
has shown he can do the job under these conditions.
It is highly unlikely that the decision makers who task you with the assessment will give you a
team. It is also unlikely that the right team is in place and available (or even known) to you. So,
you will have to conduct some sort of draft. Now, here is where you can exploit your chain of
command, as they have been around longer than you and generally have a broader range of
contacts. What you should do, once you have some understanding of the task, is write down the
types of people you need as part of your five-paragraph order. Then, take that list back up the
chain and see if you can get help in getting those types of people. Even if you are very
confident that you know who you need, you should try to exploit your seniors' knowledge of
their organizations to get the right expertise.
You will also have to control the number of people who want to subscribe to your study. Most
short-fuse efforts have high priority, and will attract a large number of rubberneckers. In this
case, you will have to be brutal and combine the working group and study group into one team,
and simply banish spectators who are neither contributors nor decision makers. The process we
recommend in Section 3 of having a study group produce and a working group review will be
problematic; if you cannot avoid such an arrangement, at least force the working group to be as
small as possible, and do not produce extra materials for them beyond what you are producing
in the course of the effort. Unless your needs for functional skills dictate otherwise, you should
not allow more than one representative on your team from any external organization.
An aside: directed telescopes and theater critics. Historian Martin Van Creveld coined the
term "directed telescope" to describe a commander's use of a special, trusted officer or agent to
bring him information directly (see Griffin [1985] for a complete discussion). If you are
assessing a hot issue on a short timeline, you may end up with such a person on your team. If
that person is serving the decision maker who commissioned the CBA, then you may have to
deal with some difficult issues.
First, recognize that the best arrangement is for you to be in regular contact with the decision
maker, rather than someone who ostensibly is working for you but instead is someone elses
agent. If that arrangement is impossible, all is not lost. After all, it is likely that the directed
telescope will give you more direct access and faster feedback than you could get otherwise.
So, see if you can make the situation better support your assessment.
Second, you need to make sure that you are in the lead, and that the directed telescope does not
take over. Decision makers will usually appoint someone who does not attempt this sort of
thing, but occasionally you will encounter a liaison who feels compelled to wave his patron's
gun in your face. In these cases, you will have to fall back on your experience to reassert your
authority.
The notion of a theater critic is less well-documented, but is nonetheless a substantive issue.
This situation arises when some organization refuses to provide representation for your Quick
Turn CBA. Ordinarily, this would be fine. But, if that organization also has veto authority over
your results, you have a theater critic: a person, group, or organization that does not participate
in the production, but will judge, and possibly kill off, the finished product.
Almost any organization can opt to play theater critic, and you will not have the time to
maneuver, cajole, or shame them into participation. What you can do, however, is to offer them
one or two progress briefings during the course of your assessment. This will eat into your
already-tight schedule, but will allow you to expose issues that you might not see coming until
the end game (when it is too late).
simply cannot work full-time on your effort. If you have summaries, they can scan them and
catch up on what happened while they were working elsewhere.
You will still need a task structure of some kind and a set of measures, but you dont need to
have those perfected at the start.
Figure 17 below shows a possible adjustment of the FAA process. It makes several steps
parallel; more importantly, it assumes that the scenarios and functions are settled in the initial
negotiations over the assessment, along with some guidance on relevant standards.
Consequently, you will specify, rather than coordinate, what operational cases and functions
will be assessed, and you will be drafting task structures and final measures.
Note that gathering doctrinal experts is an important first step. Getting them will allow you to
draft a five-paragraph order that is coherent enough to discuss with your management.
Figure 18 shows how an FNA might be compressed. You still will have one or more
operational situations that you are considering, but you will not have to go through a lengthy
reconciliation step with a number of outside organizations. In a Quick Turn CBA, your working
group will do the work and then move on to the next situation.
Remember that the FNA is designed to evaluate doctrinal approaches using programmed
forces. If this evaluation has occurred in a prior study or in an actual operation, your FNA
evaluation just consists of citing that work and justifying that the work is valid and applies to
your assessment. Furthermore, if the needs are specified as part of the tasking, you may not
need to do an FNA at all.
Figure 19 shows how you might collapse an FSA to accommodate Quick Look timelines. You
will note that it doesn't appear to be compressed much; in fact, the only task that drops out is
the need to staff excess candidates.
But, several major tasks may not apply to your Quick Turn CBA. For example, there may be no
need to do excess analysis. In addition, the issue of portfolios may collapse to recommending
one alternative from several choices, so the entire need for portfolio generation under different
frameworks disappears. Also, you may not uncover any game-changing capabilities, or you
may decide that the ones you have found can be executed adequately with existing CONOPS.
[Figure 17. A compressed FAA process: receive tasking; recruit doctrinal experts; draft and revise the 5-paragraph order; specify scenarios, functions to analyze, and conditions; derive related military objectives and capabilities; draft the overarching task structure; derive tasks; draft relevant attributes; draft measures; develop standards.]
The important point in this entire discussion is that the Quick Turn CBA is not event-driven; it is time-driven. Your challenge is to first, decide which tasks need to be done, and
second, divide your available manpower and calendar time among those tasks. As a starting tactic,
you may take all these tasks and group them into three categories:
tasks that have already been answered (either by management direction or
previous study); and
Another aside on group methods. Recall that in Section 7.2 we warned against using group
methods as the primary means of estimating outcomes, causes, and needs. Unfortunately, the
time-driven nature of the Quick Turn CBA may drive you to do exactly what we warn against,
because your schedule won't allow you to do anything else. So, how do you reconcile this
conflict?
First of all, if you have clear, logical, qualitative arguments for your causes, needs, and
recommendations, you should use them. In this case, you have to make the argument using short
papers and not briefing slides, because slides simply do not allow you to transmit enough
information to make a logical argument in a short document (for more on this, see Tufte
[2003]). Also, assigning scores to some sort of qualitative argument just to make the analysis
appear quantitative usually obscures the argument. Worse, if your target audience detects that
you did this, they will more often than not conclude that you are trying to deceive them. They
understand that you are operating under tight deadlines, so there is no need to add unnecessary
numerological veneer. The Gettysburg Address worked just fine without stoplight charts or
weighting schemes.
If you do rely on group methods such as value-focused thinking or the analytic hierarchy
process, be very clear about what you used those methods for. Were you using them to estimate
Merely citing techniques will not suffice here. In any CBA, but particularly a Quick Turn CBA,
you will have to communicate and defend what you believe about the four points above.
[Figure 18. A compressed FNA process. Analysis Preparation: collect and inspect performance data; select and finalize the analytical approach; choose the best-understood scenario. Scenario Analysis (for each scenario): execute operational analysis with doctrinal CONOPS; review and refine results. Needs Development: identify unacceptable outcomes; document causes from analysis; derive needs in terms of the operational depiction.]
We have some final advice on designing to time. First, even if you are given a 30-day deadline,
recognize that it will take at least one week to refine your final presentation with your
management and brief your oversight groups. For quick-turn issues going to the JROC, one
week appears, unfortunately, to be the minimum.
Second, if you plan on issuing some sort of data call to the Services or Combatant Commands,
this will also take at least a week. Now, if you are clever, you can be doing other work with
your study team during that week, so you can do some tasks in parallel. But the fact is that if
you must go out to the Combatant Commands, it will take them a few days to understand what
you want and give you coherent responses. Clearly, you can move faster and retain more
schedule control if you gather the information yourself. But, if you gather input by a staff
action, it will not happen overnight.
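As a simple illustration of this design-to-time arithmetic, the sketch below back-plans a notional 30-day effort; the specific allowances are assumptions made for the example, not prescribed values.

    # Notional back-planning for a 30-day Quick Turn CBA; all allowances are
    # illustrative assumptions, not prescribed values.
    total_days = 30
    presentation_and_oversight = 7   # refining the final brief and briefing oversight groups
    data_call_response = 7           # time for Services/COCOMs to answer a data call
    overlap = 5                      # analysis the core team can do while the data call is out

    analysis_days = total_days - presentation_and_oversight - data_call_response + overlap
    print(f"Calendar days left for actual analysis: {analysis_days}")  # 21 in this example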
[Figure 19. A compressed FSA process. Inputs: FAA measures and FNA needs. Alternative Generation: collect policy alternatives to needs; collect materiel alternatives to needs. Alternative Feasibility: estimate affordability; estimate strategic responsiveness. Portfolio Planning and Generation: choose portfolio frameworks; formulate and finalize the portfolio construction approach; generate portfolios for each framework. Game-Changing Capabilities: identify potential game-changing alternatives; investigate CONOPS for game-changing capabilities; scenario analysis of new alternatives; recommend experimentation and research. Excess Analysis: identify excess candidates.]
Since the Quick Turn CBA does not allow for a lengthy oversight and staffing process, you will
not have multiple independent reviews of your work and your conclusions. In most cases that is
exactly what the leadership wants, because they do not want another study that regresses to the
status quo. But, you will have to justify, either by the expertise of your group or the existing
work that you cite, any recommendations that substantively change the current program. The
only case where this will not apply is when the status quo is unexecutable, such as an
assessment done in the wake of a major program cancellation.
Finally, do not forget that you can recommend experimentation. We experiment to test theories
and avoid risky commitments, and it is perfectly legitimate for you to point out the cases where
this would be a good approach. Now, if your marching orders expressly forbid recommending
more study or experimentation, don't violate them. However, ensure you communicate that you
have not been able to reduce the uncertainty of an option sufficiently to make a clear-cut
recommendation, and ensure that your results contain sufficient information so that the decision
makers can decide whether to take the risk.
To conclude, remember that the circumstances that generate Quick Turn CBAs mean that a
decision is imminent, and will be made regardless of what you deliver (or dont deliver).
Consequently, a Quick Turn CBA is a large opportunity to influence the direction of the DOD.
As such, it is critical that you scope and negotiate the tasking quickly, recognize and plan
around the time-driven nature of the assessment, and communicate the strengths (and
weaknesses) of your results.
10. A Twenty-Question Summary
In this paper, we have put ourselves in your position, that of someone trying to execute a CBA. We
have covered what JCIDS is trying to do and how it connects to the overarching Defense Strategy and
the Joint Operations Concepts. We have translated what it asks for into an analytical framework that
should be directly applicable to your assessment. We have advised you on what talent you have to
procure, how to organize, how to execute, and where and when to expect resistance.
But, it has taken us quite a few pages to explain all those things clearly. So, as a summary, we offer
something common in the military: a checklist. What follows are the most important things you have
to do to conduct an effective CBA.
So, ask yourself the following questions as you fight your CBA campaign.
1. Do I really know why I'm doing this CBA?
2. Do I really understand the relevant strategic guidance, including the concepts?
3. Do I have the right people for my core team?
4. Do I know how I'm going to lead my core team?
5. Do I know how I'm going to function with an external working group?
6. Is my set of scenarios sufficient to cover the breadth of the strategy, and are they tied to
a relevant strategic framework?
7. Have I scoped my assessment in such a way that it both answers the questions and is
doable in a reasonable amount of time?
8. Do my operational depictions, task structures and measures flow directly from the
scenarios and CONOPS?
9. Does my quick look assessment provide an adequate view of the road ahead and bound
what I expect to conclude?
10. Do I have an analysis approach that is agile enough to consider a broad set of
alternatives, and does it account for the enemy's operational alternatives?
11. Does my analysis approach represent the contributions of the alternatives of interest
and estimate the measures of interest?
12. Have I collected a solid, defendable set of doctrinal approaches using the programmed
force?
13. Do I have solid, defendable estimates of the mission effectiveness of those approaches?
14. Have I correctly identified the causes and resulting needs from my estimated
operational outcomes?
15. Have I developed promising policy, materiel, and CONOPS alternatives?
16. Have I found any game-changing capabilities, and have I been able to describe feasible
CONOPS for them?
17. Do I have reasonable estimates of the affordability, technical feasibility, and strategic
responsiveness of my materiel alternatives?
18. Do I have a good set of alternative portfolio frameworks?
19. Have I generated a compelling set of portfolios for each framework that gives my
decision makers a real set of options?
20. Have I identified excess capabilities, and do I have a bureaucratic plan for bringing
them forward?
If the answers to all of the above are yes, you probably won't have to ask yourself the following
question:
In the future, do I want to tell people that I ran this CBA, or do I want to deny any
involvement?
We hope you find this checklist useful, if for no other reason than that your leadership will probably use
it. JCIDS asks for a great deal out of a CBA, but if you succeed, you will move the DOD forward in a
significant way.
11. References
Aldridge, Pete, Joint Defense Capabilities Study Final Report, Joint Defense Capabilities Study
Team, December 2003.
Brooks, Frederick P. Jr., The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley, 1995.
Chairman of the Joint Chiefs of Staff Instruction 3170.01F, Joint Capabilities Integration and
Development System, December 2006.
Chairman of the Joint Chiefs of Staff Manual 3010.02B, Joint Operations Concepts, 1 December
2005.
Chairman of the Joint Chiefs of Staff Manual 3170.01C, Operation of the Joint Capabilities
Integration and Development System, December 2006.
Crissman, LTC Doug, The Joint Force Capability Assessment (JFCA) Study and the Development
of Joint Capability Areas, briefing, 7 March 2005.
Department of Defense Architecture Framework Working Group, DOD Architecture Framework,
Vol. 1: Definitions and Guidelines, 30 August 2003.
Department of Defense, Defense Acquisition Guidebook, http://akss.dau.mil/dag/, 24 July 2006.
Department of Defense Directive 8260.1, Data Collection, Development, and Management in Support
of Strategic Analysis, 6 December 2002.
Department of Defense Instruction 8260.2, Implementation of Data Collection, Development, and
Management for Strategic Analyses, 21 January 2003.
Department of Defense, Global Strike Joint Integrating Concept, 10 January 2005.
Department of Defense, Rescue Mission Report [Holloway Report], August 1980.
Department of Defense, Seabasing Joint Integrating Concept, 1 August 2005.
Griffin, Gary B., The Directed Telescope: A Traditional Element of Effective Command, Combat
Studies Institute, U.S. Army Command and General Staff College, July 1991.
JCS J-8, SPG-Directed Planning Task: Integrated Architectures, briefing, July 2004.
Joint Requirements Oversight Council, JROCM 062-06, Modifications to the Operation of the Joint
Capabilities Integration and Development System, 17 April 2006.
Joint Requirements Oversight Council, JROCM 199-03, Joint Forcible Entry Operations Study
(PDM II), 20 October 2003.
Kirkwood, Craig W., Strategic Decision Making: Multiobjective Decision Analysis with
Spreadsheets, Duxbury Press, 1997.
Loerch, Andrew, and Larry Rainey (ed), Methods for Conducting Military Operational Analysis,
Military Operations Research Society, to appear April 2007.
Office of Aerospace Studies (OAS/DR), Analysis Handbook: A Guide for Performing Analysis
Studies for Analyses of Alternatives or Functional Solution Analyses, Air Force Materiel Command,
July 2004.
Pirnie, Bruce, and Sam Gardiner, An Objectives-Based Approach to Military Campaign Analysis,
RAND National Defense Research Institute, 1996.
Ryan, Paul B., The Iranian Rescue Mission: Why It Failed, Naval Institute Press, Annapolis, 1985.
Secretary of Defense, Operational Availability (OA)-05 / Joint Capability Areas, memo, 6 May
2005.
Secretary of Defense, Requirements System, memo, 18 March 2002.
Secretary of Defense, National Defense Strategy of the United States of America, March 2005.
Tufte, Edward R., The Cognitive Style of PowerPoint, Graphics Press LLC, Cheshire, 2003.
Washburn, Alan R., Bits, Bangs, or Bucks?: the Coming Information Crisis, PHALANX, Vol. 34,
No. 3, September 2001.
12. List of Acronyms
13.
From 28 August to 28 September 2006, a small team conducted a Quick Turn CBA on DOD
biometrics (the measurable physical and behavioral characteristics that allow an individual to be
identified). We offer a brief description of this CBA as an example of how a Quick Look effort was
conducted, particularly with respect to the need to design to time.
13.1. Background
The DOD has long had some biometrics capabilities, but the events of September 2001 greatly
increased the need to be able to accurately identify individuals. At that time, the Secretary of
the Army was designated to lead, consolidate, and coordinate all biometric information
assurance programs in the DOD. In addition, ASD(NII), the Assistant Secretary of Defense for
Networks and Information Integration, was given significant responsibility, since biometrics
was viewed as part of the overall information assurance program.
However, the need for biometrics capabilities was further accelerated by Operation IRAQI
FREEDOM. While the Army had stood up a Biometrics Task Force (BTF) to address these
issues, many senior leaders believed that something more had to be done. US Central
Command had submitted two Joint Urgent Operational Needs requests for biometrics
capabilities (one for base access biometrics and another for better distribution of biometrics
information) in the summer of 2005, but progress had been slow.
In March 2006, the Army presented a briefing on the biometrics program to a senior group
including the Vice Chairman of the Joint Chiefs of Staff (VCJCS). As a result, the VCJCS
directed that a team be formed to write a DOD biometrics CONOPS, and that a separate Tiger
Team review the architecture of biometrics data flow in the CENTCOM theater [DAMO-ZA,
2006]. The Tiger Team visited a number of sites in Iraq in March-April 2006 and reported the
following recurring themes.
Units see the inherent value of biometrics and have adapted biometrics at all echelons.
Lack of user feedback on the biometric information collected, otherwise known as the
"so what." The user requires acknowledgement that data collected was received,
processed, and a report sent back to the collecting organization. The user requires a
rapid response capability at the point of collection to enable missions on the battlefield.
In addition, the Under Secretary of Defense for Acquisition, Technology, and Logistics
commissioned a Defense Science Board Task Force on the subject in April 2006. His
memorandum noted that
The Department of Defense (DoD) created a biometrics management approach defined
by pre-9-11 documentation. As a result, all activities in the post-9-11 period are reactive
with ad hoc resources and management teams responding to warfighter applications that
attempt to leverage emerging developments in biometric technologies. Today, DoD must
develop options for the Fall 2006 program and budget review; and
provide a foundation for a more detailed assessment and formal JCIDS needs
documents.
He also established that this team would report to the JROC on 28 September, giving them
approximately 30 days to complete the assessment and present results to any lower-level
decision bodies.
The team eventually evolved to a group of 18 people, about half of whom worked on the
assessment 90% of the time. The rest attended the sessions 15%-60% of the time; the team also
spent time with representatives from other organizations, such as the FBI, the US Coast Guard, and the
Department of Homeland Security. The study lead made an early decision to limit participation
on the team to at most one person per organization (not including J34) to ensure that debates
were not unbalanced by force of numbers. Overall, each session averaged around 10 people,
which the team found to be workable.
We will discuss the details of how the team determined the use cases and the structure for
soliciting needs in Section 13.5. It is worth noting, however, that J-8 had solicited
information on existing biometrics initiatives in mid-June [Chanik, 2006], but the team did not
find this information to be useful, and ended up recollecting most of it.
So, by the end of the first week, the team had a methodology, a presentation plan, a tentative
schedule, and the products in place for a data call; in addition, they had collected most of the
available reports on the topic. The formal request for information went out on 1 September,
with a deadline of 7 September [J-3, 2006].
Forming. The team did not know each other when they first met, and had to rely on the
CBA lead for background, objectives, and methods. Several of the members, who came
from organizations with substantive stakes in the results, challenged the initial
objectives, scope, and operating rules.
Storming. This stage manifested itself as the team began determining the use cases.
While the team had at this point agreed on its purpose and objectives, there was
considerable debate and a number of power struggles within the group. The team lead
had to intervene often to move the group forward.
Norming. By the end of the week, the team had become functional enough to produce a
data call that was coherent and could be distributed to external organizations, and had
achieved a degree of unity with respect to what was going to be done and how.
Performing. In Tuckman's performing stage, the team reaches the point where tasks
can be divided among subgroups and disagreements resolved without team lead
intervention. It does not appear that this group really reached this stage in the first
week, as they continued to operate as a committee of the whole until they began to cost
alternatives.
J34 did provide one officer to function as sort of a scribe for the assessment. This person was
largely responsible for capturing the discussions and developing briefings. Eventually, the J34
representatives on the team adopted the habit of spending some time summarizing what had
gone on after the rest of the team left for the day.
Also, the VCJCS's high interest in the assessment led him to put a member of his own staff
full-time in the working group, so he had regular information on the group's progress. This
arrangement had both strengths and weaknesses; while there were some collisions between the
VCJCS representative and the formal study lead, the representative facilitated very quick
resolution of things such as data requests, and was able to get immediate feedback from very
high levels.
Another issue the team had to confront was that the OSD Program Analysis and Evaluation
(PA&E) was getting ready for the fall program review and felt they could not provide a
representative to the CBA. Unfortunately, PA&E was a critical organization, because they
execute the program review and would ultimately manage the recommendations of the issue
paper the CBA was supposed to produce. Consequently, the study lead opted to brief PA&E on
a weekly basis (normally during the lunch hour) to keep them informed of the progress of the
effort. By all accounts, this approach worked well.
This is not to say that the assessment was not contentious. The study team lead had to simply
shut off debate and force decisions many times, and various team members had to walk out of
heated discussions occasionally to gather themselves.
locate, ID, and track persons of interest (raids and high-value targets);
manage local populations (internment, resettlement, vetting for positions and benefits,
and border and checkpoint security).
The team next had to assemble a format to solicit desired capabilities in the data call. For this,
the team used the overarching functions in the draft CONOPS, which were:
share collected biometric information and results (e.g., where and when matches
occurred); and
Table A.1. Candidate biometrics use cases, with categories and subcategories. The categories included: locating, identifying, and tracking persons of interest during tactical operations (with subcategories such as raids, global and local tracking of high-value targets, rescue/recovery operations, and MIO/EMIO); controlling physical access (CONUS, OCONUS non-FOB, and FOB base access, and facilities/area access); managing local populations (refugee management, detainee operations, source management, checkpoints, and verifying the identity of the local population for pay and benefits); identifying friendly personnel (green, gray, and blue, and support to first responders); and countering IEDs (forensics).
The team also subdivided these functions to allow for more detailed input; for example, the
collect function was subdivided into the proportion of time the collection could fail, collection
modality (e.g., face, iris, fingerprints), mobility of the collection system required, and time to
collect.
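To make the mechanics of such a data call concrete, here is a minimal sketch of how use cases, functions, and subdivisions could be laid out as a fill-in template. The function names, subdivisions, and use cases below are assumptions drawn loosely from the narrative, not the team's actual spreadsheet format.

    # Illustrative data-call template: each row pairs a use case with a function
    # subdivision for respondents to fill in. All names are notional.
    import csv

    functions = {
        "collect": ["acceptable failure rate", "modality (face/iris/fingerprint)",
                    "mobility of collection system", "time to collect"],
        "share": ["time to distribute results", "echelons reached"],  # hypothetical subdivisions
    }
    use_cases = ["raid", "FOB base access", "detainee ops"]            # abbreviated examples

    with open("data_call_template.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["use case", "function", "subdivision",
                         "current capability", "desired capability"])
        for uc in use_cases:
            for function, subdivisions in functions.items():
                for sub in subdivisions:
                    writer.writerow([uc, function, sub, "", ""])  # respondents complete the last two columns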
Once the data call went out on 1 September, the team turned to the identification of existing
capabilities, policies, and legal constraints (step 3). They spent about three days on this step,
and used the same spreadsheets they had sent out in the data call to record all the fielded
capabilities. Since all of the DOD biometrics systems were off-the-shelf systems and there
was no integrated program of record, the team had to rely on the information built up by other
groups such as the Tiger Team, the Army's BTF organization, and other draft JCIDS
documents. The team also met with various interagency organizations, such as the FBI, to
document their current capabilities in the biometrics area.
After collecting and documenting the current capabilities, the team began turning the input
from the data call into a set of capability gaps, and also began compiling alternatives (step 4).
This activity, which took about seven days, was the most intense part of the assessment, and
probably completed the norming of the CBA team.
To screen the large amount of input that was arriving from the data call, the team used the
familiar red-amber-green system of classifying shortcomings. As a result, the team opted to
minimize work on the collect function, because the largest and most glaring performance gaps
were in the other areas. Also, a seemingly minor language problem caused some rework in the
data call. Under the store function, one of the subdivisions was called "reliability," which,
unfortunately, was interpreted in several different ways. After some struggling, the team
renamed this subdivision "data confidence," which better reflected the metric of interest.
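As a purely notional sketch of this kind of screening, the snippet below bins reported performance against desired performance into red, amber, or green. The thresholds and sample scores are invented for illustration and bear no relation to the actual biometrics data.

    # Notional red-amber-green screen; thresholds and sample scores are invented.
    def rag(desired, reported):
        """Classify the shortfall between desired and reported performance (higher is better)."""
        shortfall = (desired - reported) / desired
        if shortfall <= 0.10:
            return "green"   # at or near the desired level
        if shortfall <= 0.40:
            return "amber"   # partial capability
        return "red"         # glaring gap

    sample = {
        ("raid", "collect"): (100, 90),        # scores on a notional 0-100 scale
        ("raid", "share"): (100, 40),
        ("detainee ops", "store"): (100, 65),
    }
    for (use_case, function), (desired, reported) in sample.items():
        print(use_case, function, rag(desired, reported))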
Unfortunately, the team discovered that the structure they had opted for resulted in 18
capability gap areas. In addition to being unwieldy, this structure did not allow for a
straightforward matching of solution alternatives to gaps, since multiple gaps could be
addressed by an alternative (and vice versa). Finally, while the red-amber-green mechanism
provided quick visual evidence of where the bulk of the issues were within a use case and a
function, the team did not have any sense of how important the problems were among use cases
and functions.
After some debate, the team settled on a compact list of capability gaps, and decided to use a
modified version of the Analytic Hierarchy Process (which one of the team members had
implemented in a spreadsheet) to order the gaps and get some sense of where to focus their
efforts. This effort led to the interim results shown in Table A.2.
Table A.2. The pairwise comparison of the revised set of capability gaps.
The team employed this more as a clustering device than a weighting method. By limiting the
row and column comparison to only six ratios, the team used this as a means to divide the gaps
into groups. Fortunately, there was little variation among use cases; although the team specified
three different sets of situations, the shortcomings had similar scores across the cases.
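For readers unfamiliar with the mechanics of pairwise comparison, the sketch below shows one common way to turn a comparison matrix into priority weights (a geometric-mean approximation of the principal eigenvector). The gap names and judgment values are placeholders, and the team's spreadsheet may well have computed its weights differently.

    # Minimal AHP-style prioritization using the geometric-mean approximation.
    # Gap names and pairwise judgments are invented placeholders.
    import math

    gaps = ["match time", "match information", "share", "match scope"]

    # comparisons[i][j] = how much more important gap i is judged to be than gap j (1 = equal).
    comparisons = [
        [1,   2,   3,   5],
        [1/2, 1,   2,   3],
        [1/3, 1/2, 1,   2],
        [1/5, 1/3, 1/2, 1],
    ]

    row_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
    weights = [m / sum(row_means) for m in row_means]

    for gap, weight in sorted(zip(gaps, weights), key=lambda pair: -pair[1]):
        print(f"{gap:20s} {weight:.2f}")   # larger weights suggest where to focus effort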
The team had also begun collecting solution alternatives, and found it easier to group them by
the following capability categories:
database confidence.
The team further decomposed the alternatives by whether they addressed doctrine,
organization, training, materiel, leadership and education, personnel, or facilities (DOTMLPF), since that
decomposition would be needed to determine what implementation steps would be required
if the alternative were adopted.
Interestingly enough, there was not a great deal of competition among solution alternatives.
Due to the time compression, organizations that could have offered alternatives did not offer
irrelevant options. Also, biometrics was really not part of the core culture of any part of the
DOD, so there were not many factions with solutions to offer.
At this point, the team was ready to move to identifying costs and risks of alternatives (step 5),
but the move to a new solution taxonomy forced them to regroup the gaps into that taxonomy
and reassess priorities among the newly-regrouped gaps. The team employed a different
spreadsheet tool to aid in this assessment; this one exploited some of the metrics and tasks that
had come back from the data call, and allowed the team to have more operationally-focused
discussions on what really needed to be done. This resulted in the following priority list:
1. match time;
2. match information;
3. share; and
4. match scope.
At this point, the team had to disentangle the confusion caused by the reliability label in the
data call. In various sessions, the Combatant Commands pressed the view that database
confidence was absolutely essential; without decent information to match against, the fastest,
most complete, totally accessible, and widest-range biometrics solution was useless.
Essentially, database confidence became not just the top priority, but a prerequisite.
Once again, the team discovered that the labels they were using were not really helping to
convey either the shortcomings or the alternatives. Consequently, their briefings were modified
to present gaps, alternatives, and courses of action in terms of the following set of prioritized
descriptors:
1. match fast;
2. match accurate;
3. complete intel analysis; and
4. share.
Database confidence was dropped as a separate category (since as a prerequisite it had to be
addressed), and solutions binned to this category were spread among the match fast and match
accurate categories.
With the taxonomy finally settled, the team undertook one more ranking exercise, which was
ordering the specific gaps under each of the categories so they could begin evaluating solutions,
costs, and solution portfolios. This was done via a simple ranking process; an example for the
Share category is shown in Table A.3.
Table A.3. The top five capability gaps for the Share category, listed by priority, with the time frame (near term, long term, or both) and the DOTMLPF category (materiel or doctrine) of each gap.
Having spent four days on this effort, the team moved to the final step of their methodology,
which was to recommend courses of action (COAs; in this document, we call these portfolios).
The team had been able to agree on the best alternative for each particular category and gap, but
there was considerable debate on how to structure alternatives. One natural scheme considered
was to simply recommend funding in order of priority, i.e., COA 1 would be to fund all of the
match fast solutions, COA 2 would be to add funding for match accurate, and so on. This
method, however, would result in unbalanced COAs unless everything was funded.
Consequently, the team decided to organize their recommendations incrementally as shown
below.
Increment 3: provide the end user with in-depth analysis enabled by biometrics.
This increment contained the remaining solutions, and would require an additional
$28M.
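The bookkeeping behind such incremental courses of action is simple, as the hypothetical sketch below shows: each increment adds solutions and cost on top of the previous one. The solution names and the first two dollar figures are invented for illustration; only the $28M for the third increment echoes the text above.

    # Hypothetical roll-up of incremental courses of action; contents and most costs are invented.
    increments = {
        "Increment 1": [("rapid-match capability", 20.0), ("data-sharing gateway", 15.0)],
        "Increment 2": [("expanded matching databases", 25.0)],
        "Increment 3": [("in-depth analysis tools", 28.0)],
    }

    cumulative = 0.0
    for name, solutions in increments.items():
        step_cost = sum(cost for _, cost in solutions)  # cost in $M
        cumulative += step_cost
        print(f"{name}: adds ${step_cost:.0f}M, cumulative ${cumulative:.0f}M")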
The team spent roughly three days on this step, and then embarked on presentation refinement
and socializing the program review issue paper.
During the presentations, the team received two reviews. The first was provided by the Army's
G-8 staff, which went over the cost estimates for the alternatives and provided a solid external
review. The second review was a formal Red Team effort conducted by two general officers
from J-8 and several O-6s. Although the study lead had not requested such a review, he
accepted the offer and felt it greatly improved the final product. In particular,
the questions posed by the Red Team better exposed the rationale for the team's conclusions
and strengthened the case for the recommended alternatives.
The team gave its first briefing on 19 September, and presented their results and
recommendations to the JROC on 28 September as required by the original tasking. The JROC
endorsed their findings and recommended funding all three increments in 2007 and 2008.
13.6. Observations
This assessment followed many of the ideas in this paper. The study team's methodology
specified operational cases, used a functional taxonomy, estimated where the crucial gaps were
among the functions, considered the full spectrum of materiel and non-materiel alternatives,
considered costs, and presented alternative solution portfolios. When you consider that the
group went from its introductory meeting to briefing near-final results recommending over
$300M in initiatives in 21 days, it is clear that this effort was far from easy or routine and that
very few ad hoc groups could have done it.
The Biometrics Quick Turn CBA also typified many reasons for an accelerated assessment: the
need to take imminent budget action, the need to address an emerging need, and the need to
pull together a set of disparate examinations. As noted above, the DOD was simply not treating
biometrics as a core competency, so there was also a need to break a bureaucratic logjam.
One thing the CBA team benefited from was the fact that the VCJCS provided a realizable
scope that did not require extensive renegotiation. Indeed, the stipulation that the JROC would
be briefed 30 days after the assessment started probably swept aside the majority of the
bureaucratic hurdles that would normally plague a CBA.
The team appears to have come together quickly and operated effectively, particularly at the
end. Although it appears that the team was assembled more to provide organizational
representation than functional coverage, it had the essential doctrinal knowledge and enough
process and quantitative skills to do the assessment. While there were no quantitative methods
used beyond rudimentary decision analysis, the methods that were used lent structure to the
assessment, helped the team focus its efforts, and also provided a means to settle debates.
This CBA was also a perfect example of a design-to-time exercise. Everything that was done
was done to meet a schedule, and the various ranking exercises were designed to remove less-important considerations and focus on a handful of critical shortcomings. Also, the recognition
that 25-30% of the available time would have to be dedicated to briefings and repackaging was
an important concession to an unpleasant, but unavoidable, reality.
One thing that may strike the reader as inefficient is the repeated restructuring of the
capability gaps and solutions. While this probably could have been done better, we must again
point out that this assessment was done for a mission area with no approved doctrine. If the
draft CONOPS had not been available, the study team could have easily spent the entire 30
days trying to agree on a workable functional structure for the assessment.
Certainly, the Biometrics Quick Turn CBA did not contain all the analysis that this paper
recommends, and no one would offer it as an exemplar of a comprehensive quantitative study.
But, that is not what the VCJCS asked for. Instead, he asked for a short-term assessment to
generate solution options for an emerging mission area with well-documented shortfalls. In
fact, it is likely (at the time this was written) that the DOD will commission a much more
comprehensive CBA on biometrics, one that would do all the things described in this guide.
Nonetheless, this CBA presented alternatives linked to operational situations, used a coherent
functional structure, identified the most important gaps, and suggested multiple portfolios that
contained a spectrum of materiel and non-materiel solutions, which is precisely what this guide
recommends.
13.7. References
DAMO-ZA, Coordination of Capstone Concept of Operations (CONOPS) for Department of
Defense (DOD) Biometrics in Support of Identity Superiority, memorandum, July 17, 2006.
Krieg, Kenneth, Terms of Reference Defense Science Board Task Force on Defense
Biometrics Program, memorandum, 13 April 2006.
Chanik, Evan M., Identification of Service Biometric Activities, memorandum, 8 June 2006.
JCS J34, Quick Look Capability Based Assessment Call To COCOMS/Services Input, Joint
Staff Action Package J-3A 01333-06, 1 September 2006.
Tuckman, Bruce, Development Sequence in Small Groups, Psychological Bulletin, Vol. 63,
pp. 384-399, 1965.
Biometrics Tiger Team, Biometrics Tiger Team Trip Report: 23 April - 5 May 2006,
Department of Defense, 28 June 2006.
OSD DA&M email, 20 July 2006, cited in Joint Staff Action Package J-3A 00902-06,
DepSecDef Biometrics Tasking Memorandum, Joint Staff.