NASA SCHEDULE MANAGEMENT HANDBOOK
4.3.1.2.3 Create the Milestone Registry
4.3.1.2.4 Identify the Activity Attributes
4.3.1.3 For Inclusion in the SMP
4.3.2 Create the Assessment and Analysis Plan
4.3.2.1 Collect the Requirements for Schedule Assessment
4.3.2.2 Collect the Requirements for Schedule Analysis
4.3.2.3 For Inclusion in the SMP
4.3.3 Create the Schedule Maintenance and Control Plan
4.3.3.1 Collect the Requirements for the Schedule Maintenance Sub-Function
4.3.3.2 Collect the Requirements for the Schedule Control Sub-Function
4.3.3.3 For Inclusion in the SMP
4.3.4 Create the Schedule Documentation and Communication Plan
4.3.4.1 Collect the Requirements for the Schedule Documentation and Communication Plan
4.3.4.2 For Inclusion in the SMP
4.4 Skills and Competencies Required for Schedule Management Planning
5 Schedule Development
5.1 Best Practices
5.2 Prerequisites
5.3 Understand the P/p Scope
5.3.1 Work Breakdown Structure (WBS)
5.3.2 Integrated Master Plan (IMP)
5.3.3 Organizational Breakdown Structure (OBS)
5.3.4 Cost Breakdown Structure (CBS)
5.3.5 Integrated Master Schedule (IMS)
5.4 Develop the Basis of Estimate
5.4.1 BoE Maturity by P/p Phase
5.4.2 Source Information to Inform the BoE
5.4.3 Documenting the BoE in Conjunction with Developing the IMS
5.5 Develop the Schedule
5.5.1 Ensure Scheduling Tool Capabilities
5.5.2 Establish Schedule Field Codes
5.5.3 Implement Scheduling Method
5.5.4 Determine Schedule Hierarchy
5.5.5 Determine Activity Naming Convention
5.5.6 Capture All Scope
5.5.7 Develop Schedule Detail
5.5.7.1 Define Work Activities and Milestones
5.5.7.2 Define Level of Effort Activities
5.5.7.3 Consider Schedule Size and Granularity
5.5.8 Logically Link the Activities
5.5.8.1 Assign Activity Dependencies
5.5.8.2 Avoid Leads and Lags
5.5.8.3 Minimize Activity Constraints
5.5.9 Estimate Activity Durations
5.5.9.1 Define Time Units
5.5.9.2 Define Calendars
5.5.9.3 Derive and Assign Activity Durations
5.5.9.3.1 Schedule Estimating Methods and Schedule Estimating Relationships (SERs)
5.5.9.3.2 Schedule Estimating Databases, Models, and Tools
5.5.9.3.3 Activity Durations
5.5.10 Identify the Critical Path(s)
5.5.11 Establish and Allocate Margin
5.5.11.1 Key Guidelines for Incorporating Margin
5.5.11.2 “How Much” Margin to Establish
5.5.11.3 “Where” to Allocate Margin
5.5.11.4 Consider Budget for the Eventual Use of Margin
5.5.12 Perform Resource or Cost Loading
5.5.12.1 Resource Loading
5.5.12.1.1 Assign Resources
5.5.12.1.2 Assign Resource Rates
5.5.12.1.3 Level Resources
5.5.12.2 Cost Loading
5.5.12.3 Budget Loading
5.5.13 Time-phase the Schedule to Align with the Availability of Funding
5.5.14 Map Risks to the Schedule
5.6 Develop the Schedule Outputs
5.6.1 Integrated Master Schedule (IMS)
5.6.2 Summary Schedule
5.6.3 Analysis Schedule
5.6.4 Schedule Performance Measures/Reports
5.7 Schedule Development Summary
5.8 Skills and Competencies Required for Schedule Development
6 Schedule Assessment and Analysis
6.1 Best Practices
6.2 Assess the Schedule
6.2.1 Prerequisites
6.2.2 Perform Schedule Assessment
6.2.2.1 1st Tier Assessment Procedures
6.2.2.1.1 Procedure 1. Requirements Check
6.2.2.1.2 Procedure 2. Health Check
6.2.2.1.3 Procedure 3. Risk Identification & Mapping Check
6.2.2.2 2nd Tier Assessment Procedures
6.2.2.2.1 Procedure 4. Critical Path (and Driving Path) and Structural Check
6.2.2.2.2 Procedure 5. Basis Check
6.2.2.2.3 Procedure 6. Resource Integration Assessment
6.2.3 Exit Criteria for the Assessment Sub-function
6.3 Analyze the Schedule
6.3.1 Prerequisites
6.3.2 Perform Schedule Analysis
6.3.2.1 Ensure SRA Tool Capabilities
6.3.2.2 Collect Data
6.3.2.2.1 Procedure 1. Collect Schedule Data and Ensure Suitability for Analysis
6.3.2.2.2 Procedure 2. Collect Uncertainty and Risk Data and Ensure Suitability for Analysis
6.3.2.2.3 Procedure 3. Collect Cost Data and Ensure Suitability for Analysis
6.3.2.2.4 Procedure 4. Collect Schedule and Cost Performance Data
6.3.2.3 Build the SRA Model
6.3.2.3.1 Procedure 1. Setup the SRA Model
6.3.2.3.2 Procedure 2. Develop and Load the Schedule Duration Uncertainty Parameters
6.3.2.3.3 Procedure 3. Apply Schedule Correlation
6.3.2.3.4 Procedure 4. Develop and Load the Discrete Risk Parameters Impacting Schedule
6.3.2.3.5 Procedure 5. Map Schedule Risks to Relevant Activities
6.3.2.3.6 Procedure 6. Test and Verify the Discrete Schedule Risk Inputs
6.3.2.4 Build the ICSRA Model
6.3.2.4.1 Procedure 1. Setup the ICSRA Model
6.3.2.4.2 Procedure 2. Define and Format Costs to be Loaded into the ICSRA Model
6.3.2.4.3 Procedure 3. Map and Load the Costs
6.3.2.4.4 Procedure 4. Develop and Load the Cost Uncertainty Parameters
6.3.2.4.5 Procedure 5. Apply Cost Correlation
6.3.2.4.6 Procedure 6. Define and Load the Discrete Risk Parameters Impacting Schedule
6.3.2.4.7 Procedure 7. Map Cost Risks to Relevant Activities
6.3.2.4.8 Procedure 8. Test and Verify the Cost and Discrete Cost Risk Inputs
6.3.2.5 Analysis Execution
6.3.2.5.1 Test and Verify the Simulation Calculation
6.3.2.5.2 Interpret the Simulation Data and Statistics
6.3.2.5.3 Analyze the Simulation Results to Manage the P/p
6.3.3 Exit Criteria for the Schedule Analysis Sub-function
6.4 Skills and Competencies Required for Schedule Assessment and Analysis
7 Schedule Maintenance and Control
7.1 Best Practices
7.2 Prerequisites
7.3 Maintain and Control the Schedule
7.3.1 Procedure 1. Control – Create a Schedule Baseline
7.3.1.1 Step 1. Establish the Schedule Baseline
7.3.1.2 Step 2. Validate and Approve the Schedule Baseline
7.3.2 Procedure 2. Maintenance – Update Schedule with Actual Progress
7.3.2.1 Routine Schedule Updates
7.3.2.2 As-Needed Schedule Updates
7.3.2.3 Steps for Updating the Schedule
7.3.3 Procedure 3. Control – Measure Performance and Monitor Trends
7.3.3.1 Step 1. Measure Deterministic Performance and Monitor Trends
7.3.3.1.1 Activity/Milestone Variances and Schedule Variance (SV)
7.3.3.1.2 Activity/Milestone Performance Trends
7.3.3.1.3 Baseline Execution Index, Current Execution Index, and Hit or Miss Index
7.3.3.1.4 Schedule Performance Index (SPI), Time-based SPI (SPIt), and Earned Schedule (ES)
7.3.3.1.5 Critical Path Length Index (CPLI)
7.3.3.1.6 Margin Consumption
7.3.3.1.7 Float Erosion, Total Float Consumption Index (TFCI), Predicted Critical Path Total Float (CPTF)
7.3.3.2 Step 2. Measure Stochastic Performance and Monitor Trends
7.3.3.2.1 Probability of On-time Delivery of Critical Items
7.3.3.2.2 Risk-based Completion Trend
7.3.3.2.3 Sufficiency of Margin
7.3.3.2.4 Risk-based Tracking against the MA and ABC
7.3.4 Procedure 4. Control – Determine Corrective Action or Retention Rationale
7.3.4.1 Decision 1. Threshold Exceeded?
7.3.4.2 Action 1. Watch
7.3.4.3 Decision 2. Retain or Replan?
7.3.4.4 Action 2. Retain
7.3.4.5 Action 3. Replan
7.3.4.6 Decision 3. Rebaseline?
7.3.4.7 Action 4. Rebaseline
7.3.5 Procedure 5. Maintenance – Update Schedule Database with Corrective Actions
7.3.5.1 Complete Schedule Baseline Update Method
7.3.5.2 Baseline Control Milestone Update Method
7.3.5.3 P/p Element Baseline Method
7.3.5.4 Contractor’s Schedule Baseline Control Process Update Method
7.3.5.5 Annual PPBE Schedule Baseline Reset Method
7.3.5.6 Schedule Margin Maintenance
7.4 Exit Criteria
7.5 Skills and Competencies Required for Schedule Maintenance and Control
8 Schedule Documentation and Communication
8.1 Best Practices
8.2 Prerequisites
8.3 Document and Communicate the Schedule
8.3.1 Configuration Management and Data Management (CM/DM) for Schedule Management
8.3.2 Reporting
8.3.2.1 Communication Strategies
8.3.2.2 Data Interface Tools and Techniques
8.3.2.3 P/p Reporting
8.3.2.3.1 Reporting for Internal Reviews
8.3.2.3.2 Reporting in Preparation for LCRs/KDPs
8.3.2.3.3 Reporting Responses to Findings, Recommendations, and Actions from LCRs/KDPs
8.3.2.4 Report Types and Formats
8.3.2.4.1 Status Reporting: Where the Schedule Now Stands (Actual Data)
Milestone Status Report
8.3.2.4.2 Progress Reporting: What has been Accomplished (Plan vs. Actual Data)
8.3.2.4.3 Forecast Reporting: Prediction of Future Status and Progress (Projections)
8.3.3 Schedule Information and Knowledge Capture
8.3.3.1.1 Informal Backups
8.3.3.1.2 Formal Archives
8.4 Skills and Competencies Required for Schedule Documentation and Communication
9 Appendices and Supporting Information
9.1 Acronyms
1 Preface
1.1 Purpose
Schedule Management is an integral part of Program and project (P/p) management that, when
effectively performed, helps safeguard P/p success. The purpose of Schedule Management is to provide
the framework for coordinating, communicating, time phasing, and resource planning the necessary
tasks within a work effort in order to manage and optimize the available resources and deliver products
on time and within budget. Agency-level NASA Procedural Requirements (NPRs) and NASA Policy
Documents (NPDs) referenced throughout this handbook provide the basis of practice for Schedule
Management across the Agency. However, the scope of NASA P/ps — from researching new ways to
extend our vision into space, to designing a new crew vehicle, to exploring the outer reaches of our solar
system — is vast. This handbook emphasizes Schedule Management based on P/p life cycles (and Key
Decision Points), taking a more detailed look at the principles and best practices associated with
effectively implementing high-level requirements and acknowledging the differing levels of complexity
and other nuances that exist among NASA’s varied set of Programs and projects.
The intent of this handbook is to support the Schedule Management function by providing best practices
proven to be successful within the Agency, which will enable continuous improvements Agency-wide
that enhance programmatic processes, products, and professional growth (i.e., capabilities). As such,
the necessity of this handbook to establish consistent, Agency-wide best practices is twofold:
• The NASA Schedule Management Handbook is necessary to capture recommended schedule
management processes, methodologies, and techniques based on NASA-specific needs and
lessons learned.
• The NASA Schedule Management Handbook is necessary to define evolving Schedule
Management products to be developed during each life cycle phase in accordance with the
NASA requirements and policies.
This handbook also provides recommended practices which are considered supplementary, but
secondary, to the Schedule Management best practices.
1.2 Applicability
Schedule Management supports P/p management as a whole and is identified as one of the key
functions that aids in decision making in NASA’s Project Planning and Control (PP&C) paradigm.1 This
handbook provides Schedule Management guidance for NASA Headquarters, NASA Centers, the Jet
Propulsion Laboratory, inter-government partners, academic institutions, international partners, and
contractors to the extent specified in the contract or agreement. The authors of this handbook have
engaged with personnel associated with the NASA Schedule Community of Practice (SCoPe)2 in order to
capture the best practices of P/p Schedule Management throughout the Agency, thereby providing
informed and relevant guidance, as well as continuity of the practices utilized by the Agency’s expert
knowledge base. Providing consistency in guidance for Schedule Management through this handbook
supports an efficient and effective decision-making process for NASA management. To facilitate general
understanding of the contents of this handbook, supplemental information will be included on the
SCoPe website, including a Programmatic Acronym List and a Programmatic Glossary.3
The Schedule Management guidance described in this handbook helps to ensure that NASA P/ps are
meeting expectations of both internal and external stakeholders. The Government Accountability Office
(GAO), which performs routine audits of government agencies, explains this concept as follows:
“A well-planned schedule is a fundamental management tool that can help government P/ps
use public funds effectively by specifying when work will be performed in the future and
measuring program performance against an approved plan. Moreover, as a model of time, an
integrated and reliable schedule can show when major events are expected as well as the
completion dates for all activities leading up to them, which can help determine if the P/p’s
parameters are realistic and achievable.”4
Internally, best practice processes and products support not only day-to-day PP&C but also the
Independent Assessment (IA) function, which helps to gauge P/p achievability. The alignment of Agency
assessment guidelines to Schedule Management best practices streamlines the life cycle review (LCR)
process and provides consistent understanding between NASA’s advocate and non-advocate roles.5 The
NASA Office of Inspector General (OIG) also relies on an understanding of NASA’s best practices when
performing audits that include assessments of P/p schedule estimates. With respect to external
stakeholder expectations, this handbook specifies how and when the Schedule Management function
supports commitments made to Congress, for example. In addition, the principles, processes, and best
practices in this handbook ensure that P/ps are prepared for external audits.

1 SP-2016-3424. NASA Project Planning and Control Handbook. Chapter 3.4: Scheduling Function. September 16, 2016.
2 The NASA Schedule Community of Practice (SCoPe) is an Agency-level community of practice sponsored by the Office of the
Chief Financial Officer’s (OCFO) Strategic Investment Division and supporting Agency Programmatic Analysis Capability (APAC)
Leadership, given their role of ownership of programmatic standards and policies and stewardship of programmatic
competency. SCoPe membership includes both civil servant and contractor schedule management support from NASA
Headquarters and all NASA Centers, including component facilities.
3 SCoPe website, https://community.max.gov/x/9rjRYg
4 GAO-16-89G. GAO Schedule Assessment Guide. Page 1. December 2015. http://www.gao.gov/assets/680/674404.pdf
5 OCFO-SID-0002. NASA Standard Operating Procedure Instruction (SOPI) 6.0. Release Date: May 23, 2017.
https://www.nasa.gov/sites/default/files/atoms/files/sopi_6.0_final.pdf
1.3 Authority
As directed by the Associate Administrator, the Office of the Chief Financial Officer (OCFO) is the owner
of programmatic standards and policies, as well as the steward of programmatic capabilities, where the
term “programmatic” in this handbook refers to program management, resource analysis, scheduling6,
cost estimation, and independent assessment activities.7 This responsibility includes interpretation of
NASA requirements and policy guidance listed below:
NPD 1000.0A, NASA Governance and Strategic Management Handbook
NPD 7120.4C, Program/Project Management
NPD 1000.5, Policy for NASA Acquisition
NPR 7120.5, NASA Space Flight Program and Project Management Requirements
NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project
Requirements
NPR 7120.8, NASA Research and Technology Program and Project Management Requirements
Figure 1-1 shows the traceability of requirements to the best practices provided in this handbook by
depicting the flow-down from the highest Agency Policy Directives and Procedural Requirements
documents to the PP&C competencies. The remainder of this handbook defines the recommended best
practices for fulfilling the Schedule Management requirements set forth in NPR 7120.5, NPR 7120.7, and
NPR 7120.8.
6 Per this handbook, “Scheduling” as defined by the Associate Administrator will be taken to mean “Schedule Management.”
PP&C guidance will be updated to reflect the change as well, renaming the function from “Scheduling” to “Schedule
Management”.
7 On October 22, 2015, the NASA Associate Administrator defined the Agency’s Programmatic Capability as consisting of
program management, resource analysis, scheduling, cost estimation, and independent assessment activities. The OCFO
assumed the role of Agency Programmatic Analysis Capability (APAC) Leadership to provide two critical functions:
Programmatic Standards and Policy Owner and Programmatic Capability Steward.
Figure 1-1. Agency requirements flow down to this Schedule Management Handbook.
This handbook will be updated as needed to make Schedule Management across the Agency more
effective and efficient. It is acknowledged that most, if not all, external organizations participating
in NASA P/ps will have their own “corporate” Schedule Management policy, procedures, and guidance.
Issues that arise from conflicting schedule guidance will be resolved on a case-by-case basis as contracts
and partnering relationships are established. It is also acknowledged and understood that not all P/ps
are the same and that they may require different levels of schedule visibility, scrutiny, and control. P/p type,
value, and complexity are factors that typically dictate the breadth and depth of Schedule Management
practices employed.
1.4 References
The following are related NPRs and NASA-specific guidance documents, as well as other non-NASA
references used as source material for this document.
NPD 1000.0, NASA Governance and Strategic Management Handbook
NPD 7120.4, Program/Project Management
NPD 1000.5, Policy for NASA Acquisition
NPR 7120.5, NASA Space Flight Program and Project Management Requirements
NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project
Requirements
NPR 7120.8, NASA Research and Technology Program and Project Management Requirements
NPR 8000.4, Agency Risk Management Procedural Requirements
NPR 8705.4, Risk Classification for NASA Payloads
NASA/SP-2010-3404, NASA Work Breakdown Structure (WBS) Handbook
NASA, Cost Estimating Handbook V.4.0
NASA/SP-2016-3708, Earned Value Management Project Control Account Managers (P-CAM)
Reference Guide
NASA/SP-2012-599, NASA Earned Value Management (EVM) Implementation Handbook
NASA/SP-2016-3406, Integrated Baseline Review Handbook
NASA/SP-2016-3424, NASA Project Planning and Control (PP&C) Handbook
NASA/SP-2011-3422, NASA Risk Management Handbook
Academy of Program/Project & Engineering Leadership (APPEL)8
Chief Financial Officer University (CFO-U)9
1.5 Acknowledgments
Primary points of contact include: Michele T. King, Office of the Chief Financial Officer, NASA
Headquarters, and Robin K. Smith, Office of the Chief Financial Officer, NASA Headquarters. The
following individuals were active participants in the NASA Schedule Community of Practice working
group efforts and are recognized as core contributors to the content of this handbook:
Juan Atayde, NASA Kennedy Space Center
Erica Beam, Jet Propulsion Laboratory
Jose Camacho, NASA Kennedy Space Center
Christopher Chromik, NASA Headquarters
Zachary Dolch, NASA Goddard Space Flight Center/Lentech Inc.
A special acknowledgement goes to the following individuals who were primary contributors to the prior
iterations of the Schedule Management Handbook and the “Scheduling Best Practices Guide” previously
created at the Marshall Space Flight Center. These documents served as the foundation for much of the
content contained in this handbook.
Jimmy W. Black, NASA Marshall Space Flight Center
Anthony R. Beaver, NASA Marshall Space Flight Center
Lynne Faith, NASA Dryden Flight Research Center
James H. Henderson, NASA Kennedy Space Center
Almond H. Kile, NASA Ames Research Center
Cheryl A. Kromis, MEI – Boeing/NASA Marshall Space Flight Center
John A. McCarty, SAIC/NASA Marshall Space Flight Center
Michael W. Norris, Jacobs Engineering Group/NASA Marshall Space Flight Center
Steven O. Patterson, NASA Marshall Space Flight Center
Kenneth W. Poole, NASA Marshall Space Flight Center
Donnie E. Smith, MTS/NASA Marshall Space Flight Center
James G. Smith, Smith & Associates LLC/NASA Marshall Space Flight Center
Anita M. Thomas, NASA Headquarters
Jeanette C. Tokaz, Jacobs Engineering Group/NASA Marshall Space Flight Center
Lynn L. Wyatt, ASRC InuTeq/NASA Goddard Space Flight Center
2 NASA Schedule Management: Life Cycle, Requirements, and Best
Practices
This chapter provides an introduction to key elements of NASA’s strategic framework for Schedule
Management, tying best practices to Agency P/p life cycle requirements that are established in NPR
7120.5 NASA Space Flight Program and Project Management Requirements, NPR 7120.7 NASA
Information Technology and Institutional Infrastructure Program and Project Requirements, NPR 7120.8
NASA Research and Technology Program and Project Management Requirements, and NPD 1000.5
Policy for NASA Acquisition. Subsequent chapters deal with defining and describing best practices on
how to most effectively administer and satisfy the Schedule Management life cycle requirements that
pertain to each sub-function of Schedule Management.
For most NASA P/ps, Formulation and Implementation are further divided into incremental phasing that
allows management to periodically assess P/p progress. Figure 2-1 and Figure 2-2 illustrate how the
level of schedule detail required in space flight P/ps changes with respect to the particular phase of
Formulation or Implementation; more specifically, it increases as the P/p moves through its life cycle.
The same is true for 7120.8 P/ps as shown in Figure 2-3 and Figure 2-4. These figures further illustrate
the LCRs that comprise independent reviews, which provide assessments of a P/p’s technical and
programmatic status and health at key points in the P/p’s life cycle. NPR 7120.5 requires the use of a
single, independent review team called the Standing Review Board (SRB) to conduct certain LCRs.10 For
smaller projects that are not governed by NPR 7120.5, LCRs may be conducted by Center-led,
independent review teams (IRTs). P/p assessments, or special reviews, may also be conducted at other
times during the P/p life cycle not specifically shown in these figures, such as when a rebaseline occurs.
This means that for effective P/p management in support of P/p life cycle requirements, it is critical
that Schedule Management be initiated early in P/p Formulation and carried all the way through to Closeout.
Figure 2-1. NASA Space Flight Tightly-Coupled, Loosely-Coupled and Uncoupled Program Life Cycle Phase/Schedule Detail
Relationship.
Figure 2-2. NASA Space Flight Single-Project Program and Project Life Cycle Phase/Schedule Detail Relationship.
10 For P/p’s that adhere to NPR 7120.5, “all LCRs must assess both the program’s or project’s technical maturity and its
alignment with the Agency’s six assessment criteria identified in NPR 7120.5, NASA Space Flight Program and Project
Management Requirements, Section 2, and described in Section 5.1 of the NASA Standing Review Board Handbook.” NASA/SP-
2016-3706 REV B.
Figure 2-3. NASA R&T Program Life Cycle Phase/Schedule Detail Relationship.
Figure 2-4. NASA R&T Project Life Cycle Phase/Schedule Detail Relationship.
Because the scientific and exploration goals of Programs vary significantly, different Program
implementation strategies are required, ranging from very simple to very complex. To accommodate
these differences, NASA categorizes space flight Programs into four distinct types. Definitions of space
flight (7120.5) and research and technology (R&T, 7120.8) Programs and projects are as follows:
• Programs (7120.5 and 7120.8). Strategic investments by a Mission Directorate or Mission
Support Office that have a defined architecture and/or technical approach, requirements, funding
level, and a management structure that initiates and directs one or more projects. A program
defines a strategic direction that the Agency has identified as needed to accomplish Agency
goals and objectives.
o Single-Project Programs (7120.5). Programs that tend to have long development
and/or operational lifetimes, represent a large investment of Agency resources, and
have contributions from multiple organizations/agencies. These Programs frequently
combine Program and project management approaches, which they document through
tailoring.
o Tightly-Coupled Programs (7120.5). Programs with multiple projects that execute
portions of a mission(s). No single project is capable of implementing a complete
mission. Typically, multiple NASA Centers contribute to the Program. Individual
projects may be managed at different Centers. The Program may also include other
agency or international partner contributions.
o Loosely-Coupled Programs (7120.5). Programs that address specific objectives through
multiple space flight projects of varied scope. While each individual project has an
assigned set of mission objectives, architectural and technological synergies and
strategies that benefit the Program as a whole are explored during the Formulation
process. For instance, Mars orbiters designed for more than one Mars year in orbit are
required to carry a communication system to support present and future landers.
o Uncoupled Programs (7120.5). Programs implemented under a broad theme and/or a
common Program implementation concept, such as providing frequent flight
opportunities for cost-capped projects selected through an Announcement of
Opportunity (AO) or NASA Research Announcements. Each such project is independent
of the other projects within the Program.
• Projects (7120.5 and 7120.8). A specific investment identified in a Program Plan having defined
requirements, a life cycle cost (LCC), a beginning, and an end. A project also has a management
structure and may have interfaces to other projects, agencies, and international partners. A
project yields new or revised products that directly address NASA's strategic goals.
As with Programs, projects vary in scope and complexity and therefore require varying levels of
management requirements and Agency attention and oversight. For example, projects may consist of
primarily “in-house” work, they may be a mix of “in-house” and “out-of-house” (contracted) work, or
they may be composed of partnerships with other Agencies, universities, research institutions, or
international entities. Consequently, project categorization defines Agency expectations of PMs by
determining both the oversight council and the specific approval requirements.
Space flight projects are Category 1, 2, or 3 and shall be assigned to a category based on guidelines
established in NPR 7120.5. Space flight projects are also assigned a risk classification of Class A, B, C, or
D, based on guidance in NPR 8705.4. Specific space flight project types include:
• In-House Observatory Projects. In-house observatory projects are those in which a NASA
Center is the system integrator. Typically, the Center develops the spacecraft, procures and/or
builds the science instruments in house, procures or develops the ground system, and integrates
and tests the observatory. In-house observatory projects may also include contributions from
other international or domestic partners, including other NASA Centers.
• Out-of-House Observatory Projects. Out-of-house observatory projects are those in which a
NASA Center manages a prime contractor who serves as the system integrator. The NASA
Center may also manage one or more non-prime instrument contractors whose instruments are
provided as government-furnished equipment (GFE) to the prime. Out-of-house observatory
projects may also include contributions from other international or domestic partners, including
other NASA Centers. The mission’s ground systems may be developed in-house, be a part of the
prime contractor’s scope of work, or may be managed as a stand-alone special project within the
NASA Center.
• Out-of-House Flight or Ground System Projects (single prime contractor). For some flight or
ground systems, a single prime contractor is responsible for the entire flight or ground system
effort including that of their subcontractors.
• In-House Instrument/Payload Projects. In-house instrument/payload projects are those in
which the project is managed by a NASA Center, and the work is primarily performed by
in-house organizations/directorates. The instrument/payload may be delivered to the Center’s
in-house observatory project, an out-of-house observatory project, an external partner, or
another NASA Program Office or Center.
• Out-of-House Instrument/Payload Projects. Out-of-house instrument/payload projects are
those in which the project is managed by a NASA Center, and the work is primarily performed by
industry contractors or other external organizations. The instrument/payload may be delivered
to an external partner or another NASA Center or Program Office.
• In-House Component/Subsystem Projects. In-house component/subsystem projects are those
in which a NASA Center manages the development and delivery of a unique component,
subsystem, or other element as a “supplier” to an external organization such as an international
partner, another NASA Center, the Jet Propulsion Laboratory, another federal agency, etc.
• Special Projects. A special project is any project that cannot be classified in one of the
categories described above.
R&T (7120.8) projects are characterized by unpredictability of outcome. Funding may be at a fixed level on a
yearly basis.
Regardless of the type, category, or class of P/p being implemented, the fundamental processes for
implementing best practices described in this handbook serve as the basis for the management of any
NASA P/p schedule according to its life cycle requirements, although in some cases the breadth and
depth of the implementation of the best practices may be tailored.
2.2 Requirements
A sound, integrated, logic network-constructed schedule, developed using the Critical Path Method
(CPM), serves as the basis for planning and performance and is the primary source for all schedule data
provided to management for critical P/p decisions. This logic network schedule, or Integrated Master
Schedule (IMS), constitutes the framework for time phasing and coordinating all P/p efforts into a
master plan to ensure that objectives are accomplished within approved commitments. As such, NASA
requirements pertaining to the “schedule” are in fact written with respect to the IMS. This handbook
heavily leverages several NASA requirements documents in order to support consistency and rigor
across the NASA Schedule Management community. The requirements that dictate Schedule
Management Planning, Development, Assessment/Analysis, Maintenance/Control, and
Documentation/Communication are found in NPR 7120.5, NPR 7120.7, NPR 7120.8, NPR 7123.1, NPR
7150.2, and NPD 1000.5.
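To make the CPM calculation concrete, the sketch below (an illustration only, not drawn from NASA guidance; the four-activity network and durations are hypothetical) performs the forward and backward passes that CPM scheduling tools use to derive early/late dates, total float, and the critical path:

```python
# Minimal Critical Path Method (CPM) sketch over a finish-to-start logic
# network. Activity names and durations are hypothetical.
from collections import defaultdict

durations = {"A": 5, "B": 3, "C": 8, "D": 2}          # working days
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Invert the predecessor map to get successors for the backward pass.
successors = defaultdict(list)
for act, preds in predecessors.items():
    for p in preds:
        successors[p].append(act)

order = list(durations)  # insertion order here is already topological

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for act in order:
    es[act] = max((ef[p] for p in predecessors[act]), default=0)
    ef[act] = es[act] + durations[act]

project_finish = max(ef.values())

# Backward pass: latest start (LS) and latest finish (LF) that do not
# delay project completion.
ls, lf = {}, {}
for act in reversed(order):
    lf[act] = min((ls[s] for s in successors[act]), default=project_finish)
    ls[act] = lf[act] - durations[act]

# Total float (TF); zero-float activities form the critical path.
for act in order:
    tf = ls[act] - es[act]
    tag = "  <-- on critical path" if tf == 0 else ""
    print(f"{act}: ES={es[act]:2} EF={ef[act]:2} LS={ls[act]:2} LF={lf[act]:2} TF={tf}{tag}")
```

In this toy network, activities A, C, and D carry zero total float and form the critical path. Commercial scheduling tools apply the same arithmetic across thousands of logically linked activities, multiple relationship types, and working calendars.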
Comprehensive Schedule Management requires the establishment, utilization, and control of a schedule
baseline, or baseline IMS, and its derivative schedules. It is the responsibility of each Program or Project
Manager (PM) and P/p team to ensure that these Schedule Management requirements are adhered to,
not only during initial schedule planning and development, but also in the on-going updating,
maintenance, and control. In addition, on-going evaluation of the IMS through assessment and analysis
should be made available to the appropriate levels of management to aid in decision making, as
indicated by P/p life cycle requirements.
Requirements in NASA Procedural Requirements (NPRs), technical standards, and specifications are
identified by using the word “shall” and denote mandatory compliance by P/ps. Rationale for why the
requirement is necessary is typically available to the user in each parent/requirement-originating
document. To facilitate requirements selection and verification by NASA P/ps, a Requirements
Compliance Matrix is provided as an appendix in each NPR. The Requirements Compliance Matrix
should be used in coordination with the best practices, explanations, and guidance text in the body of
this NASA handbook to ensure the P/p is meeting the required objectives. Figure 2-5 below maps the
best practices discussed in this handbook, organized by the Schedule Management sub-function they
support, to the NASA requirements document(s) from which they originate.
[Figure 2-5 is a table with the column headings “Sub-Function,” “Document,” and “Requirement Statement,” listing the Schedule Management requirements by sub-function.]
Figure 2-5. Mapping of Schedule Management Best Practices to NASA Requirements Documents.
NPR 7120.5 further defines the expected maturity of P/p products and control plans at each LCR. P/ps
are expected to have achieved these maturities, unless the requirements have been tailored and
approved. The same expectation is typically true for P/ps that fall under other requirements
documents, such as NPR 7120.7 or 7120.8, although product maturity matrices may not be defined. It is
important to note that according to NPR 7120.8, “R&T projects that directly tie to the space flight
mission’s success and schedule are normally managed under NPR 7120.5” and would therefore adhere
to standard NASA best practices as described in this handbook.
[Figure 2-6 is a chart of the NASA life cycle phases per NPR 7120.5 (Formulation and Implementation). For Uncoupled and Loosely Coupled Programs, Tightly Coupled Programs, and Projects or Single-Project Programs (Pre-Phase A through Phase F), it shows the major life cycle reviews (e.g., MCR, SRR, SDR/MDR, PDR, CDR, SIR, ORR, MRR/FRR, PLAR, CERR, PFAR, DR, DRR, PIR), the life cycle gates (KDP-0 through KDP-IV for Programs; KDP-A through KDP-F for projects), project launches, and major events, along with the schedule activities and supporting products (Milestone Registry and Activity Attributes; Integrated Master Schedule, Basis of Estimate, and Analysis Schedule) as they progress from development through preliminary, baseline, and update states.]
Figure 2-6. The relationship of the NASA Space Flight P/p life cycle to the Schedule Management sub-functions guides the
development of Schedule Management products, which are supported by adherence to best practices.
[Figure 2-7 is a chart of the NASA life cycle phases per NPR 7120.8 (Pre-Formulation, Formulation, Approval, and Implementation). For Programs and their projects, it shows the life cycle gates (ATP, Program Approval, Program Assessment Reviews (PARs), and Closeout) and the Schedule Management activities: planning products (FAD, PCA, Program Plan, Project Plan, and Closeout Report), role assignments (scheduler and schedule analyst), and supporting products (Milestone Registry and Activity Attributes; Integrated Master Schedule, Basis of Estimate, and Analysis Schedule) as they progress from development through preliminary, baseline, and as-needed updates.]
Figure 2-7. The relationship of the NASA Research and Technology P/p life cycle to the Schedule Management sub-functions
guides the development of Schedule Management products, which are supported by adherence to best practices.
Although the processes described in this handbook can be tailored as needed to better fit the P/p scope,
when feasible the intent of the best practice should be followed. Consistent Schedule Management
utilizing best practices and supporting the overall P/p life cycle is important to the Agency for many
reasons including, but not limited to:
• Strengthening the Agency’s Schedule Management capability
• Enhancing programmatic excellence and continual improvement of P/p management at NASA
• Increasing the quality of planning and thus P/p success
• Ensuring the appropriate, required maturity of Schedule Management practices and products
throughout the P/p’s life cycle
• Capturing schedule data, narrative, and lessons learned to improve the NASA programmatic
community knowledge base, as well as to provide a rationale for recommendations and
requirements, or to share success factors across the Agency
• Facilitating coordination between PP&C communities
• Providing a common base for communication and data exchange
• Preventing conflict and duplication of effort
• Complying with internal Agency “down and in” (P/p) as well as external “up and out” (Congress,
GAO, etc.) requirements
• Adhering to programmatic requirements or specifications in contracts, grants, and other types
of agreements to ensure contractors are held accountable for delivering the products or services
to achieve P/p needs, goals, and objectives
• Enabling the career growth and development of the Schedule Management community as a
recognized and rigorous career field at NASA
and data management (CM/DM) processes, as well as communication aids and tools
throughout the P/p life cycle.
Figure 3-1. The Schedule Management function is composed of five main sub-functions: Schedule Management Planning,
Schedule Development, Schedule Assessment and Analysis, Schedule Maintenance and Control, and Schedule Documentation
and Communication.
The figures in this Chapter define the Schedule Management sub-functions and relate them to
subsequent Chapters within this handbook. Each Chapter further discusses the principles and best
practices associated with effectively implementing each of the sub-functions. Schedule Management
products to be developed are identified throughout this document as italicized and underlined phrases.
Figure 3-2. Schedule Management Planning compiles all specifications needed to build the complete set of Schedule
Management tools and processes. Schedule Development is the implementation of those specifications and results in the IMS
and all Schedule Performance Measures. Included in the figure are references to the Sections herein where details are provided.
The left-hand side of Figure 3-2 illustrates the Schedule Management Planning (Chapter 4) sub-function.
This sub-function consists of collecting all the requirements needed to completely plan the development
of the IMS, the assessment and analysis of the schedule, schedule performance measurement and
control, and the documentation and communication of the schedule information. The Schedule
Management Planning sub-function produces the product: Schedule Management Plan, abbreviated as
the SMP, which is the definitive instruction that guides the development, implementation, and
execution of all Schedule Management sub-functions.
result from the Schedule Development sub-function (Chapter 5). The following paragraphs briefly
describe these products.
Figure 3-3. Schedule Development is carried out according to the Schedule Development Plan. It requires the collection of data
that affect how the schedule is built, the documentation of the data into a Schedule BoE, along with an appropriate scheduling
tool to develop Schedule Outputs, such as an IMS. Schedule Development also produces Schedule Performance Measures from
which Schedule Performance Reports can be generated.
A Schedule Database consists of all the schedule data used to develop the IMS and document the
Schedule BoE. It includes all the Activity Attributes, as well as any directly related supporting
documentation such as the Flow Diagrams, Planning Programming Budgeting and Execution
(PPBE) Guidance, etc. The Schedule Database captures the original baseline, as well as any revised
baselines (i.e., “replanning”) or official rebaselines, the current IMS, and saved copies of each
monthly IMS update. All data products within the Schedule Database are clearly identified and archived
following the version control requirements within the configuration control process.
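As a rough illustration of the content and version discipline described above, the sketch below models a Schedule Database as a container of configuration-controlled snapshots. The class and field names are hypothetical; the handbook prescribes the data to be retained, not any particular implementation or tool.

```python
# Hypothetical model of a Schedule Database: activity attributes, supporting
# documentation, and an archive of uniquely versioned schedule snapshots.
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class ScheduleSnapshot:
    version: str   # e.g., "BL-001" or "2024-05-UPDATE" (illustrative scheme)
    kind: str      # "baseline", "rebaseline", or "monthly_update"
    as_of: date
    file_ref: str  # pointer to the archived, configuration-controlled file

@dataclass
class ScheduleDatabase:
    activity_attributes: dict = field(default_factory=dict)
    supporting_docs: list = field(default_factory=list)  # flow diagrams, PPBE guidance, ...
    snapshots: list = field(default_factory=list)

    def archive(self, snap: ScheduleSnapshot) -> None:
        # Version identifiers must be unique so traceability is preserved.
        if any(s.version == snap.version for s in self.snapshots):
            raise ValueError(f"version {snap.version!r} already archived")
        self.snapshots.append(snap)
```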
A Schedule Output is a product of the Schedule Database that is used for various management activities,
such as analysis or control, and is repeated at pre-determined intervals, usually monthly. It may include
at least one or some combination of the following:
• An IMS is the complete, end-to-end, time-phased, logically-linked network of all P/p effort that
is required to ensure that all objectives are met within approved commitments. The use of the
word “integrated” implies the incorporation of all activities, even contractor and subcontractor
efforts, necessary to complete the P/p. The IMS is utilized as the P/p management tool that
integrates the planned work, the resources necessary to accomplish that work, and the
associated budget.11 The IMS is the backbone for managing the P/p successfully, which includes
establishing the integrated performance baseline or Performance Measurement Baseline (PMB),
measuring and forecasting performance, controlling the baseline, and communicating the
overall progress against the plan.
• A Summary Schedule is a high-level roll-up of the IMS and is used for management reporting. It
is a direct derivative of the IMS and should mimic the critical path(s) within the IMS.
• When needed for schedule risk analysis, the Schedule Database is the basis for the development
of an Analysis Schedule. An Analysis Schedule should be directly traceable to the IMS, should
replicate the critical paths, and should emulate the IMS; however, it may have additional tasks
to model the impact of discrete risks.
• Schedule Performance Measures are produced by incorporating current performance data with
the planned performance in the IMS. Schedule Performance Reports are typically created
monthly. All are clearly identified and archived following the version control requirements
within the configuration control process. Sections 7.3.2 and 7.3.3 further discuss updating the
schedule with current performance and measuring performance and monitoring trends (a
minimal computation sketch follows this list).
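As a minimal illustration of how such measures are derived (the handbook covers Schedule Variance in Section 7.3.3.1.1 and the Schedule Performance Index in Section 7.3.3.1.4), the sketch below computes both from hypothetical earned value figures:

```python
# Illustrative computation of two common schedule performance measures from
# earned value data. The BCWS/BCWP figures are hypothetical.
bcws = 120.0  # Budgeted Cost of Work Scheduled (planned value), $K
bcwp = 105.0  # Budgeted Cost of Work Performed (earned value), $K

sv = bcwp - bcws   # Schedule Variance: negative means behind plan
spi = bcwp / bcws  # Schedule Performance Index: < 1.0 means behind plan
print(f"SV = {sv:+.1f} $K, SPI = {spi:.2f}")
```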
A Schedule Assessment is a planned activity that includes performing a series of checks on the IMS (or
other Schedule Outputs, as appropriate) for compliance with P/p and Agency requirements, compatibility
with NASA best practices, and overall schedule integrity. Critical path assessments and margin allocation
along the critical paths are also performed. These schedule assessment checks can be found in Section 6.2.
Results of the assessment are documented in a Schedule Assessment Report. It is imperative that all
assessments and results are related to specific Schedule Database outputs and the traceability is
maintained via version control as specified in the Configuration Management (CM)/Document
Management (DM) process.
The Schedule Risk Analysis Model, abbreviated as the SRA Model, is the model used for estimating the
probable future outcome of the P/p’s schedule performance. It appends risk parameters to the Analysis
Schedule output from the Schedule Database. Performing a Schedule Risk Analysis (SRA) as described in
Section 6.3.2 informs management of the adequacy of margin to accommodate expected risk impacts
and helps management to prioritize discrete risk mitigation activities. In cases where cost-risk analysis is required, the cost models are incorporated, and the model is referred to as the Integrated Cost/Schedule Risk Analysis Model, abbreviated as the ICSRA Model. Results from either analysis are
documented in a Schedule Risk Analysis Report. It is imperative that all analyses and results are related
to specific Schedule Database outputs and the traceability is maintained via version control as specified
in the P/p’s configuration control process.
Figure 3-5. The Schedule Maintenance and Control sub-function ensures that a schedule baseline is set, routine updates are
made to the Schedule Database, performance measurements are tracked, and corrective actions are taken, if necessary.
The hatched portions of the figure illustrate the two input functions to the Schedule Management data
set. They include: (1) the regular monthly performance reports which contain the schedule
performance data that must be input into the Schedule Database, and (2) the P/p management
approved change orders. The change orders generally result from (1) updates to the schedule such as
replanning, detailing a rolling wave, or inclusion of risk mitigation tasks, etc., (2) changes to the P/p
guidelines in the annual government PPBE process, and (3) changes needed to accommodate corrective
actions that result from performance measurements.
The left-hand portion of the figure is intended to show the entirety of the data set. It is critically
important that the data sets be consistent and carefully controlled to ensure (1) consistent
communication of data, and (2) corrective actions and other adjustments are relative to the
corresponding schedule data model.
There are two methods employed to control the schedule. The first is measuring current performance against the plan: specific data from the performance reports are used to estimate current schedule performance to plan, and when pre-determined thresholds are breached, corrective actions are required. Those corrective actions may require adjustments to specific activities, such as delaying them, increasing durations, or changing the linkages to other tasks; in those cases, a change order is issued to change the relevant data in the Schedule Database. The second method is a forward look at possible future projections of schedule performance under the influence of risk and uncertainty. There are also thresholds for this analysis that, when exceeded, may require corrective actions. The lower right portion of Figure 3-5 illustrates the performance measurement.
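As a minimal illustration of the first control method, the threshold logic can be sketched as follows; the metric names and threshold values below are assumptions chosen for the example, since actual thresholds are prescribed by the P/p.

    # Hypothetical threshold-based schedule control check.
    def control_status(metric_value: float,
                       warning_threshold: float,
                       breach_threshold: float) -> str:
        """Classify a schedule performance metric (e.g., SPI or BEI)
        against pre-determined thresholds from the P/p's plan."""
        if metric_value < breach_threshold:
            return "corrective action required"
        if metric_value < warning_threshold:
            return "watch item"
        return "on plan"

    # Example: a measured index of 0.88 against assumed thresholds.
    print(control_status(0.88, warning_threshold=0.95, breach_threshold=0.90))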
Schedule Maintenance and Control are iterative sub-functions that occur continuously throughout the
P/p life cycle. It is imperative for Schedule Maintenance and Control to be performed in conjunction
with Schedule Assessment and Analysis to ensure the integrity of the entire Schedule Management
function. Chapter 7 in this document further defines the Schedule Maintenance and Control sub-
functions.
As illustrated on the left of Figure 3-6, the Schedule Management Planning sub-function draws upon the
P/p’s overall CM/DM plan for the specific requirements needed. The requirements for CM/DM are
shown on the left-hand side of Figure 3-6, and the right part of the figure is a general illustration of a
typical suite of controlled data and documentation. All data and documentation must be consistent, and where it is important to show pending changes, those must be clearly marked and referenced to actual configuration change documentation. All documentation must also be controlled and archived per the P/p's CM/DM plan. Chapter 8 provides details.
Figure 3-6. The Schedule Documentation and Communication sub-functions detail how the P/p records and disseminates
schedule information among team members, as well as other stakeholders, and helps decision makers determine whether the
P/p’s objectives and commitments are being met. All other Schedule Management sub-functions involve the generation of
schedule information and required products at varying levels of maturity that must be properly documented and communicated
throughout the P/p lifecycle.
Project Manager (PM). The PM’s role in the Schedule Management function is to ensure Schedule
Management principles and best practices are applied in a manner that supports the Agency and
P/p life cycle requirements and objectives. The PM is responsible, with the support of the P/p Planner/Scheduler, other P/p PP&C personnel, and Technical Leads, for the schedule development
guidelines, IMS development, baseline plan approval, schedule execution, schedule maintenance,
and baseline plan control. The PM must facilitate the availability and utilization of the necessary
resources, processes, tools, and techniques such that the P/p team can be successful.
Technical Lead (including WBS Element Owner, Control Account Manager (CAM), Integrated
Product Team (IPT) Lead, and/or Product Development Lead). The Technical Lead has the assigned
responsibility of accomplishing the work contained in each WBS element to comply with the SMP.
Compliance with the SMP helps to ensure that the deliverables associated with their scope of work
are provided on time. Technical Leads are accountable for the development, execution, and control
of their work scope within the IMS, and therefore need to coordinate with the P/p
Planner/Scheduler to ensure that the schedule reflects accurate information, updated in a timely
manner.
Planner/Scheduler (P/S). The role of the Planner/Scheduler is to implement SMP processes in order
to ensure the P/p’s objectives are successfully achieved. The P/S must be familiar with the P/p
technical scope and be able to translate that information into the network logic model that becomes
the schedule baseline, or baseline IMS. The P/S accomplishes this, in part, by: (1) facilitating
planning through coordination with the P/p team to define P/p requirements and schedule
objectives; (2) developing the IMS; (3) assessing all schedule products to ensure integrity; (4)
performing schedule control by assisting the P/p team in managing changes to the IMS, which
includes baseline change control; and (5) providing insight to the P/p team by reporting schedule
progress, performance, variances, and forecasts. The P/S is also responsible for utilizing P/p
management software tools and techniques to develop, assess, maintain, and control the IMS.
Finally, P/Ss must be able to communicate and coordinate effectively with all members of the P/p
team, be proactive in their approach to problem solving, understand P/p management processes
(e.g., Initiating, Planning, Executing, Reporting, Controlling, and Closing)12, and be able to report
findings to P/p management. While the terms Planner and Scheduler are often interchangeable
insofar as both Planners and Schedulers perform Schedule Development (i.e., scheduling), one
primary distinction is that a Planner may also oversee schedulers who perform many of the
subsystem schedule development, status collection, data input, and report generation duties. The
Planner is typically in a more senior-level career position, having knowledge and experience related
to the integration of multiple programmatic disciplines (e.g., cost, schedule, risk), and is more
involved in determining Schedule Management approaches, performing detailed schedule analysis
and workaround planning to aid in decision-making, and facilitating management-level discussions
related to the P/p schedule.
Schedule Analyst. The role of the Schedule Analyst is to perform analysis that will aid in identifying
deviations from NASA’s Schedule Management best practices and risks that may compromise the
P/p's plan. The Schedule Analyst is responsible for verifying the integrity of the schedule, analyzing
the critical path(s) to determine that all critical activities are being properly tracked, and conducting
schedule risk analysis to understand how risks and uncertainties may alter the likelihood of potential
driving paths to specific milestones and negatively impact the availability of margin. The Schedule
Analyst is also responsible for analyzing the cost, schedule, and risk elements collectively for a
holistic view of the P/p's programmatic health. The Schedule Analyst is responsible for building
integrated models and performing sensitivity analyses that can provide management with insight
into different scenarios for prioritizing resources and margin to address top P/p risks and threats.
Other P/p Team Members (e.g., PP&C personnel). The role of other P/p team members is to
understand the schedule and how it relates to their specific work processes and responsibilities. For
example, the Contracting Officer’s Technical Representative (COTR) coordinates with the P/S to
ensure the contractual deliverables are aligned (e.g., data deliverables, reviews, and hardware and
software deliveries) with the activities and milestones in the schedule. The Business Manager coordinates with the P/S to ensure the budget phasing integrates with the schedule timeline. The Risk
Manager coordinates with the Schedule Analyst to ensure that risks are appropriately mapped to
the schedule, that potential risk impacts are understood, and that approved mitigations have been
incorporated into the planned schedule. In addition, the Cost Analyst coordinates with the Schedule
Analyst to ensure that costs are appropriately mapped to the schedule, such that integrated cost
and schedule risk analysis can be performed. Other P/p personnel who have a specific role in the
P/p Schedule Management function should be identified in the P/p’s SMP.
13 OCFO-SID-0002. NASA Standard Operating Procedure Instruction (SOPI) 6.0. Release Date: May 23, 2017.
https://www.nasa.gov/sites/default/files/atoms/files/sopi_6.0_final.pdf
14 SCoPe website, https://community.max.gov/x/9rjRYg.
The NASA Schedule Community of Practice (SCoPe) is responsible for activities that include:
• Ensuring that Agency policy and guidance documents reflect consistent information
regarding how to meet Schedule Management requirements
• Working with other PP&C disciplines to ensure an integrated approach to advancing the
Schedule Management capabilities through data collection and research
• Working towards the advancement of schedule assessment methodologies and analysis
techniques
• Providing reach-back for P/ps with respect to both in-line and independent Schedule
Assessment expertise
• Providing recommendations for Agency-wide implementation of tools and techniques
• Formulating training consistent with Agency policy and the Schedule Management
Handbook
• Working solutions to specific issues or areas of priority identified by the PP&C Steering
Group
General SCoPe membership consists of NASA Schedule Management practitioners and subject matter
experts (including both civil servant and contractor support) from each of the NASA Centers, the Jet
Propulsion Laboratory, the Applied Physics Laboratory, each of the Headquarters Mission Directorates,
and other Headquarters offices including the OCFO, the Office of Procurement, and the Office of the
Chief Engineer (OCE). Membership is self-selecting.
To ensure strong, consistent Schedule Management expertise is available, it is imperative that the
appropriate training be taken by the P/p team members that are involved in planning, developing,
assessing, analyzing, maintaining, using, controlling, documenting, or communicating P/p schedules.
Selection of training should address the needs and requirements of the P/p team's responsibilities.
Available NASA Schedule Management training courses can currently be found through SATERN17, APPEL18, and CFO University19, as well as through NASA's SCoPe. While formal classroom and self-taught training is valuable, it is most effective when accompanied by on-the-job training (OJT) and/or hands-on training (HOT), such as working with mentors that have direct NASA Schedule Management experience. It should be understood that the NASA Schedule Management training curriculum will cover an evolving, growing list of topics.
17 NASA System for Administrative Training and Educational Resources Network (SATERN). https://satern.nasa.gov/customcontent/splash_page/
18 NASA Academy of Program/Project & Engineering Leadership (APPEL). https://appel.nasa.gov/
19 CFO University (CFOU). https://community.max.gov/pages/viewpage.action?spaceKey=NASA&title=CFO+University
SM.P.1 Schedule Management Plan Exists
• A Schedule Management Plan exists, which defines and explains all aspects needed for managing the P/p schedule scope, including:
o Agency, P/p, Organizational, and Environmental goals, objectives, and assumptions, as well as internal/external stakeholder priorities, scope of work, roles and responsibilities;
o Establishment of schedule management strategies and processes that are clear, concise, and descriptive, including:
▪ Estimating and development scope, methods, tools, and techniques (including establishment of schedule margin and cost reserves),
▪ Assessment and analysis scope, methods, tools, and techniques,
▪ Maintenance and control scope (including partners, agreements, etc.), methods, tools, and techniques (including basis for managing scope and schedule margin, process for managing changes and/or replanning, as well as descope trigger points identified), and
▪ Documentation and communication/reporting methods (including activity codes), frequency, tools, and necessary P/p personnel interactions.
SM.P.2 Scheduling Methods/Approaches are Selected
• Scheduling methods and techniques are selected appropriate to the type and level of schedule development and management that the P/p necessitates.
SM.P.3 Schedule Management Tools are Selected
• Schedule Management tools are selected appropriate to the type and level of Schedule Management that the P/p necessitates.
SM.P.4 Milestone Registry is Defined
• The milestone registry is defined to include a list of all key dates/milestones and associated trigger points (e.g., for descopes or risk mitigations).
SM.P.5 Activity Attributes are Defined
• The activity attributes are identified appropriate to the type and level of schedule development and integrated PP&C management that the P/p necessitates.
4.2 Prerequisites
Schedule Management Planning can be initiated when:
1. The Agency and the P/p have concurred on the P/p Commitment Agreement (PCA) and the
Formulation Authorization Document (FAD)
2. The P/p has created or defined the following:
a. P/p scope
b. Descope plan
c. Initial risk list
d. WBS and WBS Dictionary
e. Work packages, control accounts, and product owners
3. The sponsors and/or Agency have defined all external notification and control milestones
4. Interfaces with other P/ps are defined
5. The current FY PPBE guideline document exists
6. The roles and responsibilities for the development of the Schedule Management function are
defined
The SMP defines and explains all aspects needed for managing the P/p schedule scope, including:
• Agency, P/p, Organizational, and Environmental requirements, goals, objectives, and assumptions, as well as internal/external stakeholder priorities, scope of work, roles and responsibilities;
• Establishment of schedule management strategies and processes that are clear, concise, and descriptive, including:
o Estimating and development scope, methods, tools, and techniques (including establishment of schedule margin and cost reserves),
o Assessment and analysis scope, methods, tools, and techniques,
o Maintenance and control scope (including partners, agreements, etc.), methods, tools, and techniques (including basis for managing scope and schedule margin, process for managing changes and/or replanning, as well as descope trigger points identified), and
o Documentation and communication/reporting methods (including activity codes), frequency, tools, and necessary P/p personnel interactions.
Schedule Planning should proceed according to the space flight (and R&T) P/p life cycles as follows:
• Pre-Phase A (Pre-Authority to Proceed (ATP)): Identify and assign the P/S and/or Schedule
Analyst. Develop the Milestone Registry. Identify the Activity Attributes. Produce the Schedule
Development Plan portion of the SMP. Make/buy the Schedule Management tools.
• Phase A-SRR (ATP): Complete a preliminary version of the SMP, including the remaining sub-
plans. Deploy the remaining Schedule Management sub-functions. Deploy the documentation
and communication tools.
• Phase A – SDR/MDR: Fully staff and begin full-up execution of all processes. Update the SMP.
• Phase B – PDR (P/p Approval): Schedule Management is fully operational. Baseline the SMP.
• Phase C/D – CDR/SIR/ORR through Launch (PARs through Closeout): Continue to implement the
SMP and update as necessary.
Schedule Management Planning is performed throughout the P/p life cycle, but has greatest emphasis
during Phase A. In pre-Phase A (Concept Studies), there is a lack of knowledge and understanding of the
technology, as well as immature mission/system requirements. Information is still being gathered and
Schedule Management Planning is performed at a very high level. As shown in Figure 4-2, Schedule
Management Planning takes into consideration requirements derived from several sources including,
but not limited to, Agency, P/p, organizational, internal and external requirements for schedule
management, performance and reporting. In addition, as tools are selected to support the execution of
the Schedule Management function, ancillary requirements are derived. By Phase A (Concept &
Technology Development), the mission/system concept definition is completed, most concept and trade
studies are completed, preliminary requirements are established, and a preliminary P/p Plan is
developed. Therefore, P/p definition becomes clear enough during Phase A to allow for a more discrete
breakdown of work tasks and milestones. The SMP is prepared during Phase A of Project Formulation
prior to Key Decision Point (KDP) I for most Programs and KDP B for Single-project Programs and
projects.
Figure 4-2. Components of the Schedule Management Plan (SMP) and sources of requirements.
The SMP can be a stand-alone plan or a subsidiary component of the P/p Plan. Regardless of how it is
structured within a P/p’s documentation, the SMP should be subject to document control. The SMP is
not intended as a detailed procedure for performing “scheduling;” rather, it is a guideline for applying
principles, processes, and best practices.
The SMP contains four sub-plans: (1) a Schedule Development Plan with Milestone Registry and Activity
Attributes as subsections, (2) a Schedule Analysis and Assessment Plan, (3) a Schedule Maintenance and
Control Plan, and (4) a Schedule Documentation and Communication Plan. The SMP defines the
execution of all the sub-plans, which address development, deployment, and execution of the Schedule
Management function. The Maintenance and Control Section of the SMP supports the development
and implementation of the Technical, Cost, and Schedule Control Plan required in NPR 7120.5. A table
of the required SMP maturity for NPR 7120.5 P/ps at given LCRs is provided in Figure 4-3.20
Figure 4-3. SMP maturity requirements by P/p phase according to NPR 7120.5.
Although not explicitly identified in NPR 7120.8 as a required product for R&T P/ps, it is expected that
the SMP would exist at a similar maturity for corresponding life cycle phases. It is also important to note
that, “R&T projects that directly tie to the space flight mission’s success and schedule are normally
managed under NPR 7120.5.” Figure 2-3 and Figure 2-4 provide an overview of the expected maturity of
the SMP for P/ps that adhere to NPR 7120.8.
Through careful development and execution of the SMP, the P/p is able to implement all the elements of the Schedule Management function, including: generating a cost and schedule estimate, time-phasing the schedule development to match the maturation of the P/p, estimating the resources needed to execute the function, and assigning roles and responsibilities related to managing the P/p schedule. An SMP
Annotated Template can be found on the SCoPe website.21
20 While NPR 7120.8A does not specifically require a Basis of Estimate (BoE) product, projects that adhere to NPR 7120.8 may
benefit from the documentation of BoEs to support the required programmatic products.
21 SCoPe website, https://community.max.gov/x/9rjRYg
For P/ps with EVM requirements, the schedule should be planned in accordance with, and integrated with, the institutional EVM processes and methodologies. Guidance for aligning the development of the schedule to support EVM can be found on the SCoPe website.22
In creating the Schedule Development Plan, all requirements needed to drive the development of the
IMS are collected. As shown in Figure 4-4, requirements for Schedule Development come from NPRs, internal and external guidance documents, and the P/p planning documents. This includes the
requirements for the selection of the schedule management tools and make/buy plans, the
development of the Schedule Database, the data collection procedures necessary for populating the
Schedule Database, and the development of the Schedule Outputs. The result is a fully operational
capability to schedule work, capture performance, and generate outputs. All of these things are
included in the Schedule Development Plan which may be a standalone plan or a part of the SMP. Best
practices for Schedule Development are captured in Chapter 5 and should be considered for
incorporation in the Schedule Development Plan.
Figure 4-4. Requirements for the Schedule Development sub-function come from the NPRs, guidebooks, and P/p planning, as
well as external requirements and guidance.
4.3.1.1 Collect the Top-Level Requirements for the Construction of the IMS
The IMS requirements are derived from several sources: the P/p-specific needs, the Agency requirements and guidelines, the P/p's external requirements, and the P/p's parent organization's requirements, as illustrated in Figure 4-5.
Figure 4-5. The Schedule Development Plan top-level requirements are derived from several sources: the P/p’s specific needs,
the Agency requirements and guidelines, P/p’s external requirements, and the P/p's parent organization’s requirements.
4.3.1.1.2 Agency Requirements
Agency documents, such as NPR 7120.5 and NPR 7123.1, require specific documents to establish P/p
commitments, plans for schedule management, and products needed by phases of development. These
include:
• P/p Formulation Authorization Document (FAD) – The FAD is authorized by NASA Headquarters
as the formal initiation of formulation. It identifies the resources, scope of work, period of
performance, goals, and objectives for the formulation process.
• P/p Commitment Agreement (PCA) – The PCA is the agreement between NASA Headquarters
and the PMs that documents the Agency's commitment to implement the P/p requirements
within established constraints. It identifies key P/p milestones for the implementation process.
• P/p Formulation Agreement (PFA) – The PFA is the single-project Program’s or project’s
response to the FAD. It serves as a tool for communicating and negotiating the P/p’s
formulation plans and resource allocations with the Program and Mission Directorate.
• P/p Plan – The P/p Plan is an agreement between NASA Headquarters, the Center Director, and
the PMs that further defines the PCA requirements and establishes the plan for P/p
implementation. It identifies additional key P/p milestones and lower level schedules and
establishes the P/p strategy for schedule development, maintenance, and control.
• P/p Budget – P/ps need to ensure that the budget plan and the schedule plan adequately
correlate. This requires a joint effort and good communication between the Resource
Management function and the P/S early in the P/p life cycle. It is a recommended practice that
budget and schedule planning be done at a level of detail that will provide sufficient
management insight, control, and the ability to accurately measure and track progress. Budget
and schedule planning and development should be carried out by both the Resource Manager
and the P/S in a manner that accurately correlates the time phasing of both products. This
collaborative approach aids in ensuring that the necessary consistency exists between the two
plans. Because this is typically a manual effort, a disciplined process should be established and
documented.
• P/p Technical, Schedule, Cost (TSC) Control Plan – The TSC Control plan documents how the P/p
plans to control requirements, technical design, schedule, and cost to achieve the program
requirements on the P/p. The plan describes how the P/p monitors and controls the
requirements, technical design, schedule, and cost to ensure that the high-level requirements
levied on the P/p are met. It describes the P/p’s technical, cost, and Schedule Performance
Measures in objective, quantifiable, and measurable terms and documents how the measures
are traced from the program requirements on the P/p. In addition, it documents the minimum
mission success criteria associated with the program requirements on the P/p that, if not met,
trigger consideration of a Termination Review. The minimum success criteria are generally
defined by the P/p’s threshold science requirements. The P/p also develops and maintains the
status of a set of programmatic and technical leading indicators. While certain technical
indicators are required, the Agency also highly recommends the use of a common set of
programmatic indicators to support trend analysis throughout the life cycle.23 The schedule
control portion of this plan should reflect what is captured in the SMP.
More abstract, but likely more important, are the stakeholder priorities, which may carry schedule implications as well as schedule reporting requirements. From these sources, typical requirements are
usually control milestones which are loaded into the Milestone Registry, and specific reporting forms
and formats such as those needed to show performance to schedule, which are further defined in
Chapters 7 and 8. After the PDR, there are specific milestones in the Management Agreement (MA)
document and the Agency Baseline Commitment (ABC) document that must be in the Schedule
Database and regular reports of performance to those milestones are required. On an annual basis, the
PPBE process will issue a guideline document to the P/p, which contains scheduling requirements that
need to be met in concert with an annual funding profile.
23NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook. Pages 157-158.
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000400.pdf
4.3.1.2 Collect the Ancillary and Derivative Requirements for Schedule Development
Within the Schedule Development sub-function, there are ancillary requirements and derivative
requirements. While the sections above describe the sources of imposed requirements on the Schedule
Management function, this section of the SMP identifies the requirements derived internal to the
Schedule Management function. The sources of these requirements are shown in Figure 4-6.
Figure 4-6. There are ancillary and derivative requirements that are not specifically levied on the Schedule Management
function but must be determined from other sources in order to completely fulfill the needs of the Schedule Management
function.
CPM scheduling should be used, when practical. In conjunction with CPM, a P/p may choose to employ Rolling Wave Planning, a progressive-elaboration method that allows scheduling to occur in waves, adding a greater level of detail as the P/p evolves. The P/S should also consider different
IMS development techniques that acknowledge the differences among NASA’s P/p types, acquisition
strategies, external partnering agreements, and other factors with respect to the integration of schedule
data. These methods and techniques are described in Section 5.5.3 and Section 5.6.1.
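For illustration, the forward and backward passes at the heart of CPM scheduling can be sketched in a few lines; the activity names, durations, and logic links below are hypothetical.

    # Minimal CPM sketch: forward pass, backward pass, and critical path.
    durations = {"A": 3, "B": 5, "C": 2, "D": 4}
    preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    order = ["A", "B", "C", "D"]  # topological order

    early_finish = {}
    for act in order:  # forward pass: earliest finish dates
        early_start = max((early_finish[p] for p in preds[act]), default=0)
        early_finish[act] = early_start + durations[act]

    project_finish = max(early_finish.values())
    late_start = {}
    for act in reversed(order):  # backward pass: latest start dates
        succs = [s for s in order if act in preds[s]]
        late_finish = min((late_start[s] for s in succs), default=project_finish)
        late_start[act] = late_finish - durations[act]

    # Activities with zero total float form the critical path.
    critical = [a for a in order
                if late_start[a] == early_finish[a] - durations[a]]
    print(project_finish, critical)  # 12 ['A', 'B', 'D']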
Schedule Assessment Tools. Schedule health check tools are needed. Some are automated and
work well as add-ons. An example is the NASA STAT tool, an MS Project add-on. Other tools are
standalone and require the IMS to be uploaded to the tool, such as Deltek’s Acumen Fuse. Still
other assessment tools may be check lists developed from best practices such as those included in
the GAO Schedule Assessment Guide.24 The assessment tools need to be specified and a make or
buy plan developed.
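To make the idea of an automated health check concrete, the sketch below counts a few conditions in the spirit of published checklist items (e.g., missing logic, hard constraints, long lags); the task fields are assumptions and do not reflect any specific tool's schema.

    # Hypothetical automated schedule health check.
    tasks = [
        {"id": 1, "preds": [],  "succs": [2], "constraint": "ASAP", "lag": 0},
        {"id": 2, "preds": [1], "succs": [],  "constraint": "MSO",  "lag": 10},
    ]

    def health_check(tasks):
        return {
            "missing_predecessors": sum(1 for t in tasks if not t["preds"]),
            "missing_successors":   sum(1 for t in tasks if not t["succs"]),
            "hard_constraints":     sum(1 for t in tasks if t["constraint"] != "ASAP"),
            "long_lags":            sum(1 for t in tasks if t["lag"] > 5),
        }

    print(health_check(tasks))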
Schedule Risk Analysis Tools. Schedule Risk Analysis (SRA) tools are sometimes integral with the
scheduling software and sometimes add-on macros. Examples of add-ons are JACS, Polaris,
Primavera Risk Analysis, @Risk, and Full Monte. If using an add-on approach, the selection must be
made in parallel with the selection of a scheduling tool to ensure compatibility.
Specialty Tools. All scheduling software can output useful reports, but they don’t always output the
specific report format needed. It is best to work with the P/p management team, pose typical
decision-making reports, and select those that best suit their needs. Once that is done, the creative
process on how to generate those reports begins. There may be a need for specific data exports
from the scheduling tool or the SRA tool. Those export formats need to be defined. There may be a
need to make or buy performance measurement plotting tools. Examples of plots needed for performance measurement are Baseline Execution Index (BEI) and Schedule Performance Index (SPI) trend charts.
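For illustration, the two measures just named follow simple formulas; the sketch below uses made-up inputs and the commonly published definitions (BEI as tasks completed versus tasks baselined to be complete; SPI as earned value over planned value).

    # Illustrative schedule performance measure calculations.
    def bei(tasks_completed: int, tasks_baselined_complete: int) -> float:
        """Baseline Execution Index: completed tasks versus tasks the
        baseline called complete by the status date."""
        return tasks_completed / tasks_baselined_complete

    def spi(earned_value: float, planned_value: float) -> float:
        """Schedule Performance Index from EVM data: EV / PV."""
        return earned_value / planned_value

    print(round(bei(45, 50), 2))        # 0.9
    print(round(spi(4.2e6, 5.0e6), 2))  # 0.84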
Tools are addressed in the chapters corresponding to Schedule Management functions that they
support. A list of tools commonly used throughout the Schedule Management life cycle can be found on
the SCoPe website.25
Milestone Registry
Project: Lunar Polar Explorer
Date: 10/1/2015
Version: MR.001 Rev C
ID   Name   Owner   Date   Type*
…
n
*Type: C = Control; N = Notification
The Milestone Registry may also be used to identify major P/p events (e.g., LCRs, KDPs), contractual or
acquisition events (e.g., procurements, hardware deliveries), or other interfaces or programmatic
milestones. As such, the Milestone Registry is helpful in communication with other PP&C functions. For
instance, initial acquisition milestones are provided to the Acquisition and Contract Management
function for use in developing solicitations. The Milestone Registry may ultimately be maintained in the
IMS, as shown in Figure 4-8.
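As a sketch of how registry data might be handled programmatically, the records below mirror the example table above; the field names and milestone entries are hypothetical.

    # Hypothetical Milestone Registry records with filtering by type.
    milestones = [
        {"id": "M001", "name": "PDR",              "owner": "PM",   "date": "2016-06-01", "type": "C"},
        {"id": "M002", "name": "CDR",              "owner": "PM",   "date": "2017-03-15", "type": "C"},
        {"id": "M003", "name": "Partner delivery", "owner": "COTR", "date": "2017-09-30", "type": "N"},
    ]

    # Control (C) milestones anchor the IMS; Notification (N) milestones
    # drive stakeholder reporting.
    control = [m["name"] for m in milestones if m["type"] == "C"]
    print(control)  # ['PDR', 'CDR']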
25Schedule Management tools are referenced in the Agency Schedule Management Tool Matrix located at the SCoPe website,
https://community.max.gov/x/9rjRYg.
Figure 4-8. The figure shows an example of a generic Milestone Registry maintained within the IMS and filtered using field
codes.
Figure 4-9. All necessary Activity Attributes must be identified in order to allocate activity data to fields in the IMS.
Cost interface attributes are those that relate the activity to the cost estimates (e.g., Resource Name,
Resource Type, Resource Uncertainty Distribution and parameters, etc.). Risk Management Interface
attributes are those that relate to the Risk Management Process (e.g., risk likelihood and impact
probability distribution). P/p-defined attributes are special attributes that the P/p may assign such as
the name of a contract, a reference field to a sub-project schedule, a type field for different categories
of P/p elements or flags for sorting. Intrinsic attributes are those specific to the activity itself (e.g., WBS,
OBS, CBS/CAM, uncertainty distribution, etc.). Most scheduling software has default fields for many of
the intrinsic attributes.
Establishing all of the Activity Attributes during Schedule Management Planning facilitates the
construction of the Schedule BoE by defining the data required for the activities. For example, activity
uncertainty parameters need to be captured in the Schedule BoE and should therefore be defined as
part of the Activity Attributes.
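For illustration, the four attribute families described above can be pictured as one record per activity; the field names below are assumptions chosen for the sketch, not a prescribed schema.

    # Hypothetical Activity Attributes record grouping the four families.
    from dataclasses import dataclass, field

    @dataclass
    class ActivityAttributes:
        # Intrinsic attributes
        wbs: str
        obs: str
        duration_uncertainty: tuple  # e.g., (low, likely, high) in days
        # Cost interface attributes
        resource_name: str = ""
        resource_type: str = ""
        # Risk Management interface attributes
        mapped_risk_ids: list = field(default_factory=list)
        # P/p-defined attributes
        contract: str = ""
        flags: dict = field(default_factory=dict)

    act = ActivityAttributes(wbs="1.3.2", obs="ENG",
                             duration_uncertainty=(20, 25, 40))
    print(act.wbs, act.duration_uncertainty)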
The construction of the IMS must be compliant with the Milestone Registry, and routine reporting as well as risk assessment reporting should address the milestones.
After collecting the requirements for the attributes of the activities, construct the table of Activity Attributes for inclusion in the Schedule Development Plan as an appendix. The table of Activity
Attributes is used to develop the IMS by defining the required fields for the Schedule Database.
At this point in the planning process, most specifications are available for building the IMS. A schedule
tool has been selected for logically linking the activities and exporting and displaying the schedule
information. A make/buy plan is available, and a schedule of development and deployment activities is
available. An estimate of the resources and skill level needed for development of the IMS should be
included in this section of the SMP. Resources for maintenance should not be included in the
development section of the SMP, as they are included in the section on Schedule Maintenance.
In creating the Schedule Assessment and Analysis Plan, all requirements needed to define the processes
for Assessment and Analysis are collected. The requirements collected, along with the Activity Attributes previously identified in the Schedule Development Plan, will also aid in the implementation of the
Schedule Assessment and Analysis sub-functions. Best practices for Schedule Assessment and Schedule
Analysis are captured in Chapter 6 and should be considered for incorporation in the Schedule
Assessment and Analysis Plan.
Figure 4-10. Requirements for the Schedule Assessment sub-function come from the NPRs, guidebooks, and P/p planning.
• Critical/Driving Path and Structural Check. Assesses the structural quality and fidelity of all possible critical paths and driving paths and compliance with horizontal traceability standards. Depends upon a satisfactory Health Check.
• Basis Check. Assesses the justification of each discrete schedule element, including risks.
Depends, in part, upon a satisfactory Risk ID & Mapping Check.
• Resource Integration Check. Affirms that P/p’s budget, workforce, and cost estimates at any
point in the P/p life cycle map to the corresponding IMS.
The tools and techniques planned for the quality assessment of the schedule must be able to support the Schedule Assessment sub-function per P/p requirements. These may include forms, formats, check lists, and a make/buy plan for software tools.
Figure 4-11. Requirements for the Schedule Analysis sub-function come from the NPRs, guidebooks, and P/p planning.
NPR 7120.5 requires risk-informed schedule completion range estimates at P/p milestones as early as
MCR. This requirement can be met through use of an SRA to establish the expected range of completion
dates according to P/p-identified confidence levels. NPR 7120.5 also requires a JCL estimate at KDP C.
The JCL requirement necessitates the use of a cost-loaded schedule with both cost and schedule risks
and uncertainties loaded into an Integrated Cost and Schedule Risk Analysis Model (ICSRA Model). NPR
7123.1 specifies a success criterion of ensuring that cost and schedule commitments can be met with
acceptable risk at subsequent milestones.
In addition, the P/p may levy requirements to have an SRA or an ICSRA performed for the following
reasons:
• As support to the establishment of the baseline
• At specified milestones in preparation for LCRs
• At regular intervals for tracking risk-based estimates-at-completion
• As specified to support risk mitigation planning
• As support to development of the schedule to ensure sufficient schedule margin
The above requirements will specify the type of analyses and the trigger(s) and frequency of those
analyses. Other requirements are needed to specify the analysis tool and the expected outputs. Some
scheduling software includes a schedule risk analysis package or can accommodate schedule risk analysis
add-ons. Examples of schedule risk analysis tools are JACS, Polaris, Primavera Risk Analysis, @Risk, and
Full Monte.
Whether the complete IMS or an Analysis Schedule is used as the basis for the SRA or the ICSRA, it will
need to interface with the risk management data and the cost data.26 Techniques for integration of
schedule, cost, and risk must be specified as a part of the planning process. This needs to be done to
determine whether there are additional requirements to make/buy application software to link the
databases within the tools. This collection of requirements is used to develop the analysis capability and
facilitate the allocation of resources to execute the analyses.
The expected outputs of the SRA or the ICSRA need to be defined, and tools to process the data and
create the reports will need to be specified. Then a make/buy plan needs to be created and included as
a part of the SMP. Examples of outputs that need to be considered are:
• Confidence level curves and data tables
• Probability density functions (PDFs) and data tables
• Scatterplots and data tables
• Risk and task sensitivity indicators such as tornado charts
• Risk trends over time
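To make the first two output types listed above concrete, the sketch below runs a toy Monte Carlo simulation over a three-activity serial network with triangular duration uncertainty; all numbers are hypothetical and the model is far simpler than a real SRA.

    # Toy Monte Carlo sketch of an SRA confidence-level table.
    import random

    uncertain_durations = [(30, 40, 60), (20, 25, 35), (50, 55, 80)]  # (low, likely, high) days

    def one_trial() -> float:
        return sum(random.triangular(low, high, likely)
                   for low, likely, high in uncertain_durations)

    trials = sorted(one_trial() for _ in range(10_000))
    for cl in (0.50, 0.70, 0.80):
        print(f"{int(cl * 100)}% confidence finish: "
              f"{trials[int(cl * len(trials))]:.0f} days")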
26The NASA Project Planning and Control (PP&C) Handbook discusses the interfaces between the Schedule Management
Function and other PP&C functions. See SP-2016-3424. September 16, 2016.
Upon completion of this task, all information is available for development of the analysis capability and
the reporting formats, scheduling the analyses (e.g., frequency), and allocating resources for the
analysis.
Figure 4-12. The P/p must define the requirements for Schedule Maintenance to update the schedule. A typical process is
shown here.
Figure 4-13. The P/p must define the requirements for Schedule Control. A typical process is shown here.
It is important to note that any integrated, technical/schedule/cost control aspects are required to be
included in a Technical, Schedule, and Cost (TSC) Control Plan per NPR 7120.5. The Schedule Control
section of the SMP should support and be consistent with the content in the TSC Control Plan. The NPR
should be referenced for the complete set of requirements for the TSC Control Plan, which includes
documenting how the P/p plans to control requirements, technical design, schedule, and cost to achieve
its high-level requirements. The TSC Control Plan will:
• Describe the plan to monitor and control the requirements, technical design, schedule, and cost
of the P/p.
• Describe the P/p's Schedule Performance Measures in objective, quantifiable, and measurable
terms and document how the measures are traced from the program high-level requirements.
• Establish baseline and threshold values for the performance metrics to be achieved at each Key
Decision Point (KDP), as appropriate. In addition, document the mission success criteria
associated with the P/p-level requirements that, if not met, trigger consideration of a
Termination Review.
• Develop and maintain the status of a set of programmatic and technical leading indicators to
ensure proper progress and management of the P/p. These include:
o Requirement Trends (percent growth, to-be-determined/to-be-resolved (TBD/TBR)
closures, number of requirement changes)
o Interface Trends (percent Interface Control Document (ICD) approval, TBD/TBR
burndown, number of interface requirement changes)
o Verification Trends (closure burndown, number of deviations/waivers approved/open)
o Review Trends (Review Item Discrepancy (RID)/Request for Action (RFA)/Action Item
burndown per review)
o Software Unique Trends (number of requirements per build/release versus plan)
o Problem Report/Discrepancy Report Trends (number open, number closed)
o Cost Trends (Plan, actual, UFE, EVM, NOA)
o Schedule Trends (critical path slack/float, critical milestone dates)
o Staffing Trends (Full-time equivalent (FTE)/work year equivalent (WYE))
o Technical Performance Measures (Mass margin, power margin)
o Additional P/p-specific indicators, as needed
• Describe the approach to monitor and control the P/p's ABC. Describe how the P/p will
periodically report performance. Describe mitigation approach if the P/p is exceeding the
development cost documented in the ABC to enable corrective action prior to triggering the 30
percent breach threshold. Describe how the P/p will support a baseline review in the event the
Decision Authority (DA) directs one.
• For loosely coupled or uncoupled programs, describe the EVM requirements flowed down to the
projects. For tightly coupled programs, single-project programs, and projects, describe the
EVMS. Include references to the EVM Implementation Plan, a control plan that may be stand-
alone or included as part of the P/p Plan, which details the P/p’s processes for establishing,
monitoring, and controlling the IMS and utilizing the technical and schedule margins and UFE to
meet the management and commitment baselines, as well as the methods the P/p will use to
communicate changes for the schedule.
• Describe any additional specific tools the P/p will use to implement the control processes (e.g.,
the requirements management system, the information management systems, Integrated P/p
Management Reports (IPMR), etc.).
• Describe how the P/p will monitor and control the IMS, including any replanning techniques
available.
• Describe how the P/p will utilize its technical and schedule margins and Unallocated Future
Expense (UFE) to control the Management Agreement and external commitment baselines.
• Describe how the P/p plans to report technical, schedule, and cost status to the MDAA,
including frequency and the level of detail.
• Describe how the P/p will address technical waivers and deviations and how dissenting opinions
will be handled.
4.3.4.1 Collect the Requirements for the Schedule Documentation and Communication Plan
The requirements for the CM/DM of the schedule management plans, processes, and products are
derived from the P/p overall CM/DM plan. The data and products to be managed are identified in
Figure 4-14 below.
Figure 4-14. The documentation and data to be managed by the CM/DM process is identified here.
The documentation and data to be managed and controlled consist of, at a minimum, informal backups and formal archives of the following:
• SMP
• Schedule Database, including its data inputs and the Schedule BoE
• Schedule Outputs, including but not limited to, the IMS, Summary Schedule, and Analysis
Schedule
• Schedule Assessment Outputs/Report and Schedule Risk Analysis Outputs/Report
• Schedule Performance Reports and Corrective Actions/Baseline Change Requests
• Lessons Learned
All data products must be clearly coded for consistency. For example, a performance report must be clearly tagged to a specific output from the Schedule Database, which is in turn tagged to the current performance report that was used for the update. Another example is the different levels of Gantt charts: detailed charts for internal management and high-level charts for external review. They must all be consistent and coded such that the relationship is clear.
All interested parties will specify the report types needed. Those requirements need to be collected and
used to specify the processes and tools needed to generate the required data for those reports. This
will further lead to the definition of forms and formats usually defined in Data Requirements Documents
(DRDs).27 Examples of report types can be found in Section 8.3.2.4.
Communication of schedule information varies with the audience. The audience must be identified and
the appropriate information products, message type, delivery, and schedule for distribution defined to
support internal P/p reviews, LCRs, and KDPs.
5 Schedule Development
It is a best practice for the schedule to be developed in accordance with the Schedule Management
Plan. Per Section 4.3, the Schedule Management Planning sub-function produces the Schedule
Management Plan, which provides instructions for Schedule Development to guide the development
of the IMS and associated schedule products. Specifically, the Schedule Development Plan, the first of
four sub-plans in the SMP, includes the definition of the tools and techniques appropriate to the
type/level of scheduling the P/p necessitates. The objective of the Schedule Development process is to
define, develop and deploy a scheduling capability, including the capability to display the P/p’s time-
phased activities in an IMS and to export specific outputs as required for the other Schedule
Management sub-functions as described in Chapters 6, 7, and 8. When complete, the P/p will have a
Schedule Database contained within a scheduling tool that has the capability to generate Schedule
Outputs, including an IMS with all the required Schedule Performance Measures and associated Schedule
Performance Reports, a Summary Schedule for management reporting, and an Analysis Schedule to be used for the SRA/ICSRA. The Schedule Development sub-function will culminate with the creation of the IMS, which supports the requirement to baseline the IMS.
27 NASA has standard procedures in place to support the early development and documentation of operational concepts during system development. This includes Data Requirements Documents (DRDs), which describe the format and content of the information to be provided, as well as the Data Requirements List (DRL), which sets forth the data requirements in each DRD. https://www.nasa.gov/sites/default/files/files/NNK14MA74C-Attachment-J-02-Data-Requirement-Deliverables(1).pdf
28 The Integrated Program Management Report (IPMR) Data Requirements Document (DRD) Implementation Guide discusses different options for tailoring the Data Item Description (DID), which describes overarching requirements. The IPMR is a consolidation of the Contract Performance Report (CPR) and the IMS and is required on all new contracts when an EVMS is a requirement. The IMS is Format 6 of the IPMR. See the NASA IPMR DRD Implementation Guide for preparation of the IPMR DRD, https://evm.nasa.gov/reports.html. Appendix D of the NASA Earned Value Management (EVM) Implementation Handbook provides guidance for the CPR DRD, https://evm.nasa.gov/handbooks.html. CPR Format 5 for IMS analysis can be found at https://evm.nasa.gov/reports.html.
29 SCoPe website, https://community.max.gov/x/9rjRYg
SM.D.1 Schedule Development Follows the SMP
• The schedule is developed in accordance with the Schedule Management Plan.
SM.D.2 Schedule BoE Provides Rationale for all Elements of the Schedule
• The Schedule Basis of Estimate (BoE) is created and maintained throughout the P/p’s life cycle and documents the basis rationale for all elements of the planned schedule, assessment and analysis findings, reporting artifacts, and primary source data, documents, and other pertinent information.
SM.D.3 Schedule is Developed Using Appropriate Tools
• The schedule is developed using tools appropriate to the type and level of schedule management that the P/p requires.
SM.D.4 Schedule Activities are Coded to Facilitate P/p Management Support Processes/Functions
• The schedule is coded such that it facilitates P/p management support processes and other programmatic functions.
SM.D.5 Schedule is Developed Using Appropriate Scheduling Methods
• The schedule is developed using Critical Path Method scheduling.
SM.D.6 Schedule is Tiered According to WBS
• Schedule activities are collected as organized in the WBS and tiered according to the lower-level, related WBS items.
SM.D.7 Schedule Naming Convention is Established
• A schedule activity naming convention is established that allows for clear, concise, and differentiable activities.
SM.D.8 Schedule Activities Capture All Work Scope Down to the Work Package Level
• Schedule activities capture all approved work scope, such that all work can be allocated to complete the WBS elements in an integrated manner.
SM.D.9 Schedule is Developed to Lowest Appropriate Level of Detail
• The schedule is developed to the lowest level of detail appropriate, typically the work package level, as early in the P/p life cycle as possible.
SM.D.10 Schedule Activities Demonstrate Horizontal Traceability
• Schedule activities demonstrate horizontal traceability, such that they are logically sequenced using proper relationship types that account for the interdependence of all activities and milestones.
SM.D.12 Schedule Activities Use Minimal Lead and Lag Relationships
• Schedule activities only use lead and lag relationships when the values represent real situations of needed acceleration or delay time between activities.
SM.D.13 Schedule Activities Limit the Use of Constraints
• Schedule logic limits the use of constraints other than “As Soon As Possible” to situations that represent actual work flow.
SM.D.14 Schedule Activities are Scheduled According to the Same Time Units
• All activity durations are scheduled according to the same time units.
SM.D.15 Schedule Activities are Represented According to Appropriate Calendars
• Activities are scheduled according to representative calendars that appropriately distinguish between working and non-working days.
SM.D.16 Schedule Activity Durations are Estimated Using Appropriate Sources/Processes
• Schedule activity durations, including associated duration uncertainties, are derived based on sources and/or processes that are appropriate and provide the best justification for their estimation.
SM.D.17 Adequate Schedule Margin is Identified as Part of the Schedule Baseline
• Adequate margin is established and allocated as part of the schedule baseline and is clearly identifiable.
SM.D.18 Cost and/or Resources are Assigned to Schedule Activities
• The schedule includes costs and/or resources assigned to all applicable activities at the most appropriate WBS level.
SM.D.19 Schedule is Time Phased
• The schedule is time-phased to align with the availability of funding to provide the earliest possible finish date.
SM.D.20 Discrete Risks are Mapped to the Schedule
• Discrete risks are quantified and mapped to appropriate activities within the schedule.
SM.D.21 All Schedule Products Tie to the IMS
• The Integrated Master Schedule (IMS) is the foundation for all schedule information.
SM.D.22 Schedule Demonstrates Vertical Traceability to the IMS at all Levels
• The schedule reflects vertical traceability in that any and all supporting schedules contain consistent information and can be traced to the IMS.
5.2 Prerequisites
Schedule Development can be initiated when the following are available:
• P/p Plans, including domain-related plans
• The SMP, including schedule guidance and ground rules & assumptions (GR&As)
• Other P/p GR&A documents
• The Schedule Development Plan, the SMP sub-plan that specifies the requirements, implementation approach, and timeline for developing the IMS
• The Milestone Registry
• The table of Activity Attributes
• A scheduling tool, selected to facilitate the maintenance, documentation, and control of the IMS
The Schedule BoE is typically documented in conjunction with the development of the IMS, with
preliminary and baseline versions established when required and subsequent updates throughout the
P/p life cycle, as necessary. The following sections guide the P/S through the Schedule Development
process.
A clear understanding of the work content is necessary before a valid schedule can be developed. The
P/p work scope may be captured in the P/p Plan or in a collection of other P/p documents (e.g., Acquisition Plan, Verification Plan, Request for Proposal, Statement of Work (SOW)/contracts, and other external agreements such as international partnership agreements, including MOUs, MOAs, etc.). P/p
scope may include information gleaned from mission concepts, trade studies, system requirements, test
and verification requirements, safety requirements, hardware and software specifications, system
design, interface design, tooling requirements/design, manufacturing standards, unique P/p ground
rules and assumptions (GR&As), known risks, etc. These inputs should be clearly articulated by the
technical team and incorporated into the WBS and WBS Dictionary. The WBS will cover all work
elements identified in the approved P/p scope of work, including both in-house and contracted efforts.
A trace between P/p Plans, agreements, and other P/p documentation helps to ensure that all work is
captured in the WBS.
Since a WBS plays such a critical role in organizing and managing a P/p, it is important to know what
attributes are involved in a sound WBS document. Listed below are several key characteristics generally
found in a complete and meaningful P/p WBS document:
• Predominantly product-oriented
• Uses correct standard level two WBS template (from NPR 7120.5 and NPR 7120.8)
• Sub-divided elements are logical, hierarchical, and easy to understand
• Consistent with NASA Structure Management (NSM) coding
• Includes total P/p scope of work (including contractor effort)
• Allows for work summarization at each level
• Subdivision of work (hierarchy) is aligned with system architecture (e.g., system, subsystem,
component)
• Reflects element integration and relationships
A good WBS defines the effort in measurable elements that provide the means for integrating and
assessing technical, schedule, and cost performance. Care should be taken to validate that the total P/p
scope of work is included in the WBS prior to establishing the schedule baseline. If work is not included
in the WBS/WBS Dictionary that has been approved by P/p management, then it should not be included
within the IMS. The structure and format of the schedule should closely correlate to the approved WBS
to ensure traceability and consistency in reporting. This is accomplished by including within the IMS the
correct WBS code that is associated with each schedule task for all applicable elements, such as
hardware, software, test facilities, logistical subsystems, subcontracts, international contributions, and
support systems. Task definition begins with the product-oriented WBS, extending and detailing the
WBS down to discrete and measurable tasks.
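As a small illustration of the selection, sorting, and summarization the WBS enables, the sketch below rolls WBS-coded task data up by code prefix; the codes and durations are hypothetical.

    # Hypothetical roll-up of WBS-coded schedule data by WBS level.
    tasks = [
        {"wbs": "1.3.1", "name": "Drawings Completed", "duration": 40},
        {"wbs": "1.3.2", "name": "Analysis Reviewed",  "duration": 15},
        {"wbs": "1.2.1", "name": "IMS Reviewed",       "duration": 10},
    ]

    def rollup(tasks, level):
        totals = {}
        for t in tasks:
            key = ".".join(t["wbs"].split(".")[:level])
            totals[key] = totals.get(key, 0) + t["duration"]
        return totals

    print(rollup(tasks, level=2))  # {'1.3': 55, '1.2': 10}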
In addition to providing a framework for planning, the WBS becomes very important to the P/S by
allowing various reporting data to be selected, sorted, and summarized to meet the analysis and
forecasting needs of P/p management and to aid in Schedule (and cost) Control. For P/ps with
contractor support, the contractors are typically required to extend approved Contractor WBS (CWBS)
elements to the necessary level of detail. It should be noted that while the Agency Core Financial
System is currently limited to seven WBS levels for capturing actual P/p costs, a P/p’s technical WBS and
schedule can further extend to lower levels to ensure that work definition and progress insight is
sufficient for proper management. NPR 7120.5 and NPR 7120.8 outline WBS structures for space flight
programs and research and technology programs, respectively, and should be used as guidance on
creating WBSs for these types of P/ps. The NASA WBS Handbook provides additional examples that can
be tailored for most P/ps.30 Starting with the approved WBS will not only help ensure that the total
scope of work is included in the schedule, but also will ensure consistency in the integration of cost and
schedule data.
Figure 5-2 provides an example of a product-oriented WBS with recommended development guidance
highlighted.
Figure 5-2. Product-oriented Work Breakdown Structure (WBS) example.
31 http://acqnotes.com/acqnote/careerfields/integrated-master-plan
IMP Level Activity # Task Name WBS Reference
Event A Event A - Integrated Baseline Review/PDR (IBR/PDR) -
Accomplishment A01 Management Planning Reviewed -
Criteria A01a Program Organization Established 1.2.1
Criteria A01b Initial Configuration Management Planning Completed 1.2.2
Criteria A01c Initial Integrated Master Schedule Reviewed 1.2.1
Criteria A01d Risk Management Plan Reviewed 1.2.1
Accomplishment A02 Baseline Design Reviewed -
Criteria A02a Requirements Baseline Completed 1.3.1
Criteria A02b Review of Existing Baseline Engineering Drawings Completed 1.1.1
Accomplishment A03 IBR Conducted -
Criteria A03a IBR Meeting Conducted 1.2.1
Criteria A03b IBR Minutes and Action Items Generated 1.2.1
Accomplishment A04 PDR Conducted -
Criteria A04a PDR Meeting Conducted 1.3.2
Criteria A04b PDR Minutes and Action Items Generated 1.3.2
Event B Event B - Critical Design Review (CDR) -
Accomplishment B01 Design Definition Completed -
Criteria B01a Design Deltas to Baseline Identified 1.1.1, 1.3.1
Criteria B01b Drawings Completed (Baseline & Delta) 1.3.1
Accomplishment B02 System Performance Assessment -
Criteria B02a Updated Drawings and Specifications Reviewed 1.3.1
Criteria B02b Analysis Results Reviewed 1.3.2
Criteria B02c Test Results Reviewed 1.3.2
Accomplishment B03 Mission Performance Predictions Reviewed -
Criteria B03a Analysis Results Reviewed 1.3.2
Criteria B03b Test Results Reviewed 1.3.2
Accomplishment B04 Payload Integration Plan Reviewed -
Criteria B04a Payload Design Reviewed at the System Level 1.3.3
Criteria B04b Payload Testing (Component, Functional, Static) Reviewed 1.3.5
Criteria B04c Safety and Failure Analysis Reviewed 1.3.2
Accomplishment B05 Initial Test Plan Reviewed -
Criteria B05a Initial Test Schedule Reviewed 1.2.1
Criteria B05b Initial Test Requirements Reviewed 1.2.2
Accomplishment B06 Critical Design Review (CDR) Conducted -
Criteria B06a IBR/PDR Minutes and Action Item Closure Plan Finalized 1.2.2
Criteria B06b CDR Meeting Conducted 1.3.2
Criteria B06c CDR Minutes and Actions Generated 1.3.2
The IMP provides a PM with a systematic approach to planning, scheduling and execution.32 Both the
IMP and the IMS form the foundations for the implementation of the EVMS. The IMP should provide
sufficient definition to allow for tracking the completion of required accomplishments for each event
and to demonstrate satisfaction of the completion criteria for each accomplishment. In addition, the
IMP demonstrates the maturation of the development of the product as it progresses through a
32The Integrated Program Management Report (IPMR) Data Requirements Document (DRD) Implementation Guide states,
“IPMR DRD shall be integrated with the Contract Work Breakdown Structure (CWBS), the Integrated Master Plan (IMP) if
applicable, Integrated Master Schedule (IMS), Risk Management Processes, Plans and Reports (where required), Probabilistic
Risk Assessment Processes and Reports (where required), the Cost Analysis Data Requirement (CADRe) and the
Monthly/Quarterly Contractor Financial Management Reports (533M/Q).” See the NASA IPMR DRD Implementation Guide for
preparation of the IPMR DRD, https://evm.nasa.gov/reports.html.
disciplined systems engineering process. The IMP events are not tied to calendar dates; each event is
completed when its supporting accomplishments are completed and when this is evidenced by the
satisfaction of the criteria supporting each of those accomplishments. The IMP is generally contractually
binding and becomes the baseline execution plan for the P/p. Although fairly detailed, the IMP is a
relatively top-level document in comparison with the IMS. The IMS relates to the IMP in that it shows all
the detailed tasks required to accomplish the work effort contained in the IMP in a time-based network
of activities. Thus, the IMP outline code should be traceable to both the WBS and the IMS with all tasks
containing an appropriate IMP assignment, if/when applicable.
The IMP and IMS are valuable tools a PM can use in preparing for a Request for Proposal (RFP) and
Source Selection because they serve as the basis of an offeror’s proposal and evaluation criteria. The
IMP and IMS should clearly demonstrate that the P/p is structured and executable within schedule and
cost constraints and with an acceptable level of risk. Thus, both the IMP and IMS are key ingredients in
P/p planning, proposal evaluation, source selection, and program execution.33 However, the IMP is not
a NASA-required product.
33Department of Defense. Integrated Master Plan and Integrated Master Schedule Preparation and User Guide. Version 0.9.
October 21, 2005. http://acqnotes.com/acqnote/careerfields/integrated-master-plan.
The OBS also identifies the resources available to assign to work activities and to resource load the
schedule. When combined with the WBS, the OBS is used to develop a responsibilities assignment
matrix (RAM), which clearly identifies which organization is responsible for each task in the schedule as
shown in Figure 5-5. RAMs are typically used to identify control accounts, which are described in the
following section, in support of the EVMS.
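As a simple illustration of the WBS/OBS intersection, the hedged sketch below builds a rudimentary RAM from illustrative assignments; the organization names, codes, and roles are hypothetical, not drawn from an actual P/p:

    # Illustrative RAM sketch: map (WBS element, OBS organization) -> responsibility.

    assignments = [
        ("1.2.1", "Project Office", "Accountable"),
        ("1.3.1", "Structures Branch", "Responsible"),
        ("1.3.2", "Systems Engineering", "Responsible"),
    ]

    ram = {}
    for wbs, org, role in assignments:
        ram.setdefault(wbs, {})[org] = role

    # Each WBS/OBS intersection identifies who owns the work; in EVM terms these
    # intersections typically define the control accounts.
    for wbs in sorted(ram):
        for org, role in ram[wbs].items():
            print(f"WBS {wbs:8} | {org:20} | {role}")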
Typically for NASA P/ps, the WBS is the primary source for development of the CBS, with each control
account being consistent with a work package or detailed task.34 If composed with cost information, a
WBS may serve directly as a CBS. Otherwise, it may be loaded with cost information attributed to its
respective elements to create the CBS. A resource- or cost-loaded schedule can be used as a tool that
yields insight and assistance to the P/p management team in their management of weekly and monthly
“resource” allocations. It assists the P/p with the on-going evolution of P/p budget estimates that
satisfy various Agency, program, and P/p budget development needs. For example, cost loading ensures
the P/p has a complete and consistent performance baseline (or formal PMB, if applicable) that includes
integrated cost and schedule for all elements of the Work Breakdown Structure (WBS). Because it is not
uncommon for the cost-estimating tool and the IMS to differ in WBS structure at lower levels, it may be
necessary to roll up, or otherwise adjust, the cost estimate in order to align the WBS levels in both tools. When
cost and schedule are developed jointly, cost loading verifies alignment at the lowest level of the cost
WBS (schedule WBS is likely at a much lower level). The integration of programmatic (cost, schedule,
risk) and technical elements provides a better understanding of how programmatics are interrelated.
For example, the dependencies between cost increases associated with schedule slips (due to potential
risks, poor performance, uncertainty, or any other constraint) or possibly even cost decreases with
schedule duration reductions (opportunity, risk mitigation, additional funding, etc.) represent how an
overall plan may be affected by changes in any element. Additional information on Resource and Cost
Loading can be found in Section 5.5.12. Additional information the CBS can be found in the NASA Cost
Estimating Handbook, Appendix B.
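Where the cost tool’s WBS runs one level deeper than the IMS, the roll-up described above can be as simple as summing children to a common level. The sketch below is one way to do it; the WBS codes and dollar values are illustrative only:

    # Hedged sketch: roll a lower-level cost-WBS estimate up to the level used in
    # the IMS so the two tools can be aligned. Codes and dollars are illustrative.
    from collections import defaultdict

    cost_estimate = {        # the cost tool carries one more WBS level than the IMS
        "1.3.1.1": 2.0,      # $M
        "1.3.1.2": 1.5,
        "1.3.2.1": 4.2,
    }

    def roll_up(estimate, levels=3):
        """Sum costs to the first `levels` WBS segments (e.g., 1.3.1.1 -> 1.3.1)."""
        rolled = defaultdict(float)
        for code, cost in estimate.items():
            parent = ".".join(code.split(".")[:levels])
            rolled[parent] += cost
        return dict(rolled)

    print(roll_up(cost_estimate))    # {'1.3.1': 3.5, '1.3.2': 4.2}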
In some cases, the schedule may not directly include elements of the CBS (i.e., resource or cost
loading the schedule). Depending on the specific cost models or estimating approaches the cost analyst
has chosen, the P/p WBS may not have sufficient granularity, or misalignment may exist between the
WBS and the estimating methods. Any adjustments that are made to the P/p WBS must be coordinated
with the P/p to ensure that the changes will not cause issues with understanding or communicating the
estimate.35
The CBS should be traceable to the P/p budget. Budget planning information (i.e., all estimated P/p
costs and obligations including FTEs, WYEs, ODCs, procurements, travel, facilities, and other costs for
each fiscal year during all phases of a P/p) is used during Schedule Management Planning and Schedule
Development, leading to an approved baseline IMS. Having a clear understanding of the budget, and
specifically, the funding that will be available is critical to establishing a credible Schedule BoE, as the
IMS should be traceable at some level to the P/p CBS. This information aids in determining IMS task
durations, interdependencies, constraints, and calendars.
It is imperative that the baseline IMS correlates to and is in agreement with all segments of the
integrated cost and schedule baseline, in order to establish a good baseline against which performance can
be measured. For instance, the PMB is the time-phased cost plan for accomplishing all authorized work
scope in a P/p's life cycle, which includes both NASA internal costs and supplier costs. The P/p's
34 NASA Cost Estimating Handbook, V4.0. February 2015. Appendix B. Page B-2.
https://www.nasa.gov/sites/default/files/files/01_CEH_Main_Body_02_27_15.pdf
35 NASA Cost Estimating Handbook, V4.0. February 2015. Appendix B. Page B-2.
https://www.nasa.gov/sites/default/files/files/01_CEH_Main_Body_02_27_15.pdf
performance against the PMB is measured using EVM, if required, or other performance measurement
techniques if EVM is not required. Figure 5-6 illustrates how cost and schedule are linked together
to inform the PMB.
Figure 5-6. Cost and schedule estimates must be integrated to establish a credible PMB.
Having a clear understanding of the budget, and specifically, the funding that will be available to a P/p
and when is critical in establishing a credible IMS. Funding levels and phasing may restrict the amount
of work that can be done in a specific time period, forcing the P/p to replan or, if severely constrained,
may lead the P/p to descope (i.e., minimize or delete some requirements). Thus, it is important to
understand whether the P/p scope can be accomplished per the available funding given the costs
associated with the planned work. Incorporating costs into a P/p IMS provides a time-phased spending
estimate (i.e., cost-loaded schedule) that can be compared to funding availability over time. If there are
misalignments, the P/p has the data needed to re-phase the planned work or to descope the
requirements to match available budget.
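A minimal sketch of this comparison appears below; the funding and phased-cost figures are illustrative only, not drawn from an actual P/p:

    # Hedged sketch: compare a time-phased (cost-loaded) spending estimate to the
    # authorized funding profile by fiscal year. All numbers are illustrative.

    funding = {2015: 120, 2016: 160, 2017: 180, 2018: 170}       # $M authorized per FY
    phased_cost = {2015: 110, 2016: 175, 2017: 165, 2018: 168}   # $M planned per FY

    for fy in sorted(funding):
        planned = phased_cost.get(fy, 0)
        margin = funding[fy] - planned
        flag = "" if margin >= 0 else "  <-- re-phase work or descope"
        print(f"FY{fy}: funding {funding[fy]} $M, planned {planned} $M, margin {margin} $M{flag}")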
Caution. During the planning process, it is important to ensure that P/p commitments never exceed the
authorized P/p funding for a specific fiscal year and do not exceed the planned annual budget for the
complete LCC. Remember that P/p funding and the P/p budget are different, but related, entities.
NASA funding is incremental, almost always by FY, and refers to the dollars authorized for P/p
expenditure during that FY. On the other hand, a P/p budget plan refers to the value assigned to the
time-phased resources necessary to accomplish the scheduled effort. There should always be
integration between funding, planned budget, and the associated work content to be scheduled. In
Figure 5-7, an authorized budget plan is shown as a function of FY.
[Figure 5-7 chart: “Cost Phasing versus Authorized Budget,” plotting cost ($M, 0–200) by fiscal year (2015–2020) against the Planned Annual Budget Level.]
The P/S must work with the P/p to schedule the work such that the P/p estimated cost with
Management Reserve (MR) does not exceed the authorized budget plan for any FY. It is important to
understand the difference between MR and UFE.
• Management Reserve (MR). MR is an amount of the total P/p “budget” withheld for
management control rather than designated for accomplishment of a specific task or set of
tasks. MR is typically set aside for unforeseen and unplanned events. MR is not included in the
performance baseline (or official PMB, if applicable). In other words, no scope is assigned to
MR. MR is typically held at the total P/p or contract level. MR should not be used to cover past
performance variances. Expected uses of MR include:
o Budgeting work that is within the P/p or contract scope (not for external changes);
o Replanning future work based on improved knowledge, such as work method/sequence,
make/buy decision changes, changes to planning assumptions, etc.
o Budgeting for the realization of known/unknown risks, or as a buffer to offset risk
• Unallocated Future Expense (UFE). Although not included in the performance baseline (or
official PMB, if applicable), UFEs are costs that are expected to be incurred but that cannot yet be
allocated to a specific WBS sub-element of the P/p’s plan because the estimate includes risks
and specific needs that are not yet known. In other words, UFE is the “funding” that is provided
to accommodate the realization of risk and uncertainty associated with a cost or schedule
estimate. UFE may also be used for overruns and for changes within the scope of the P/p.
Management control of some or all of the UFE may be retained above the level of the P/p (i.e.,
Agency, Mission Directorate, or Program). These funds may ultimately be distributed to
mitigate the risk, to make the product work, or to accommodate cost or schedule growth, but
because not all risks or uncertainties will be realized, initial allocation of funds to particular WBS
elements would be premature. During a P/p’s KDPs, the Decision Authority typically determines
whether UFE is necessary, and documents the decision in a Decision Memorandum. For P/ps
with a JCL requirement, UFE is typically established by exercising probabilistic techniques, and is
the portion of estimated cost required to meet a specified JCL. The JCL is further described in
Section 6.3.2.4, as well as in the NASA Cost Estimating Handbook, Appendix J.
5.3.5 Integrated Master Schedule (IMS)
The Integrated Master Schedule (IMS) is the “when” of P/p scope. It covers all internal NASA P/p
activities and all external contractor activities, when applicable. The purpose of an IMS is to provide a time-phased
plan for performing the P/p’s approved total scope of work and achieving the P/p’s goals and objectives
within a determined timeframe. Whether developed for a Program or project, the IMS contains tasks,
milestones, and interdependencies logically sequenced in a manner that accurately models the
implementation plan for all approved scope from P/p start through completion based on all P/p work as
defined/broken down by the established WBS. The IMS also provides management a vehicle which
enables integration of the approved P/p work scope reflected in the work breakdown structure (WBS),
cost estimate, and programmatic risks to ensure alignment with the P/p’s integrated performance
baseline (or PMB). This includes both government and contractor work. The detail is sufficient to
identify the longest path of activities through the entire P/p. Prior to establishing the baseline, the
schedule is referred to as the preliminary schedule or preliminary IMS; once baselined, it is the schedule
baseline or baseline IMS. Figure 5-8 shows an example of a schedule that integrates the “who”, “what”,
and “how much” of the P/p scope.
Figure 5-8. Integrated Master Schedule (IMS) example that ties together all aspects of the P/p scope.
The remainder of this Chapter discusses: (1) the Schedule BoE, which documents the ground rules and
assumptions (GR&A), constraints, and any other rationale that dictates how the IMS is developed, and
(2) the development of the IMS. The IMS is further discussed in Section 5.6.1.
Populating the Schedule Database necessitates the development and maintenance of a robust BoE in
order to ultimately produce an IMS that can be considered reliable. The BoE should capture the basis
rationale associated with all data that feeds into the Schedule Database, and it should complete the
trace from that rationale to the primary sources of data used to develop the schedule by including all
referenced material in some form. This handbook provides no specific BoE template or format;
however, the P/p’s BoE should always house both the current and past versions of the IMS, along with
supporting records that can be easily annotated and tracked by P/Ss, according to the Schedule
Documentation processes described in Section 8.3.3.
Note: Hereafter in this document, the schedule BoE dossier will be referred to simply as the “BoE”. It is
noted where the Schedule BoE should be differentiated from other types of “basis of estimate”
documents (like those pertaining to cost).
36NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2020. Appendix A.
https://nodis3.gsfc.nasa.gov/npg_img/N_PR_7120_005E_/N_PR_7120_005E_.pdf
or project shall document the BoE in retrievable program or project records.”37 A table of the required
BoE maturity at given LCRs is provided in Figure 5-9.38
[Figure 5-9 table: BoE maturity across Formulation and Implementation (KDP 0, KDP I, through KDP n) by P/p type (e.g., Uncoupled and Loosely-Coupled), beginning with an initial estimate (for range) and progressing through updates at each review, culminating in an update for the cost and schedule estimate.]
Figure 5-9. Schedule BoE maturity requirements by P/p phase according to NPR 7120.5.
Although not explicitly identified in NPR 7120.8 as a required product for R&T P/ps, it is expected that a
BoE would exist at a maturity corresponding to the maturity of the IMS. It is also important to note that,
“R&T projects that directly tie to the space flight mission’s success and schedule are normally managed
under NPR 7120.5.” Figure 2-3 and Figure 2-4 provide an overview of the expected maturity of the BoE
for P/ps that adhere to NPR 7120.8.
The BoE endures across the life cycle; its structure and maturity are phase-dependent. For Uncoupled and
Loosely-Coupled Programs, a preliminary BoE is required at SRR, with a baseline at SDR, and updates at
subsequent LCRs. For Tightly-Coupled Programs, preliminary BoEs are required at SRR and SDR, with a
baseline at PDR, and updates at subsequent LCRs. For projects, preliminary BoEs are required at MCR,
SRR, and SRR/MDR, with a baseline at PDR, and updates required in conjunction with the maturity of the
schedule at each life cycle review. It is important to note that the BoE will evolve as the P/p matures.
For example, a project’s BoE for Phases E and F is not required until SIR.
Early in Formulation, the P/p may need to rely on historical data from past P/ps to estimate the overall
schedule duration. Pre-Phase A and Phase A typically use analogies provided by tools or databases that
37 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2020. Page 37.
38 While NPR 7120.8A does not specifically require a Basis of Estimate (BoE) product, projects that adhere to NPR 7120.8 may
benefit from the documentation of BoEs to support the required programmatic products.
store historical information, such as SMART, the Schedule Repository, CADRe, and NICM.39 A summary
of these tools and databases can be found in Section 5.5.9.3.2.
While it is likely that some historical data will be available for most P/ps, there may be instances where
analogous data does not exist (e.g., new technology developments). In these cases, the P/S and/or
Schedule Analyst should increase the amount of uncertainty, and perhaps the point estimate itself, to
account for the lack of relevance of the analogy. Any assumptions made concerning the derivation of
uncertainty and durations should be documented in the BoE and the IMS, and the P/S should ensure
that the IMS carries the appropriate amount of float and/or margin.
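One possible way to widen a three-point duration estimate when the analogy is weak is sketched below. The widening factor is an assumption made for illustration, not agency guidance; whatever approach is actually used should be documented in the BoE:

    # Hedged sketch: widen a three-point (min, most likely, max) duration estimate
    # when the analogy behind it is weak. The widening factor is illustrative only.

    def widen_estimate(min_d, ml_d, max_d, relevance):
        """relevance in (0, 1]; lower relevance spreads the three-point estimate
        further from the most likely value."""
        spread = 1.0 / relevance
        widened_min = max(ml_d - (ml_d - min_d) * spread, 0)
        widened_max = ml_d + (max_d - ml_d) * spread
        return widened_min, ml_d, widened_max

    # An analogy judged only 50% relevant to a new-technology task:
    print(widen_estimate(20, 30, 45, relevance=0.5))   # (10.0, 30, 60.0) days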
In subsequent phases, documentation will have been generated specific to the P/p with more detailed
information, and the schedule will be more detailed by means of the rolling wave or similar approach.
By PDR, when the schedule is more detailed, defining all work required to accomplish the complete P/p
effort, the basis rationale for the selected durations, the duration uncertainties, and any other specific
attributes for the activities should be captured in the BoE. Attributes of the activities to include in the
BoE may include, but are not limited to:
• Activity owners
• Workflow logic
• WBS, OBS, and CBS identifiers (WBS Dictionary should provide appropriate description of the
work effort that is represented by schedule activities)
• Work package identifiers
• Shifts required
• Duration and duration uncertainty with associated rationale
• Other assumptions and/or constraints
Thoroughly documenting the BoE also aids the P/S in carrying out the Schedule Assessment procedures,
such as the Requirements Check, described in Section 6.2.2.1.1, and the Basis Check, described in
Section 6.2.2.2.2.
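As an illustration only, a BoE entry carrying these attributes might be structured as in the following sketch; the field names and values are assumptions of this example, not a prescribed BoE format:

    # Hedged sketch of one way to structure a BoE record for a single activity,
    # mirroring the attribute list above. Field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class BoEEntry:
        activity_id: str
        owner: str
        wbs: str
        work_package: str
        duration_days: int
        duration_uncertainty: tuple                    # (min, most likely, max) days
        rationale: str                                 # basis for duration and uncertainty
        sources: list = field(default_factory=list)    # trace to primary data
        assumptions: list = field(default_factory=list)

    entry = BoEEntry(
        activity_id="A-1042",
        owner="Structures Branch",
        wbs="1.3.1",
        work_package="WP-17",
        duration_days=30,
        duration_uncertainty=(25, 30, 45),
        rationale="Analogy to prior panel fabrication; vendor quote on file",
        sources=["historical schedule record", "vendor quote"],
    )
    print(entry.activity_id, entry.rationale)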
39The Cost Analysis Data Requirement (CADRe), the Schedule Management and Relationship Tool (SMART), the Project Cost
Estimating Capability (PCEC), and the NASA Instrument Cost Model (NICM) can be found on the One NASA Cost Engineering
Database (ONCE), www.oncedata.msfc.nasa.gov. REsource Data STorage And Retrieval System (REDSTAR) access can be
requested through the NASA Access Management System (NAMS), https://www.hq.nasa.gov/office/itcd/nams.html.
• P/p Plans (see Appendix I in NPR 7120.5 for the types of P/p Plans required and their associated
maturity by phase)
• Agreements and Authorization Documents (e.g., Formulation Agreement Document, Decision
Memorandums, etc.)
• Scope Definition
o Work Breakdown Structure (WBS) and WBS Dictionary
o Integrated Master Plan (IMP), if available
o Organizational Breakdown Structure (OBS)
o Cost Breakdown Structure (CBS)
• Cost Estimate/BoEs
o Program Planning Budget Execution (PPBE) data
o Request for Proposal (RFP) or Contract
▪ Task Agreements (TA)
▪ Data Requirements Document (DRD)
▪ Bill of Materials (BOM)
o Core Financial Business Warehouse Reports
• P/p Risk Information (Risk Management Plan) and Database
o Risk Statements and Context
o Risk Matrix, including Likelihood (probability) and Consequence (duration distributions)
o Risk Impact descriptions (cost, schedule, technical)
o Risk Mitigation descriptions (incl. cost and schedule requirements, technical aspects)
These sources provide critical insights regarding overall P/p scope and duration needed for developing a
schedule with a valid basis, such as: correct task sequencing, responsibilities, task duration, and
resource information.
It is important to realize that P/p personnel may not have the same interpretation or understanding of
the approved SOW. Resolving these differences is necessary for the development of an accurate and
useable schedule for P/p management. The P/S can play a significant role in helping to resolve these
differences by asking the right questions (e.g., In what WBS element does specific effort belong? What
type of deliverable is required? What type of testing is required?), and by bringing to light the areas of
conflict so that responsible managers can come to an agreement on the work scope. For example, the
P/S should always help the P/p team understand the necessary inputs (e.g., responsibility, in-house or
contracted effort, quantities, and facility requirements) to task and schedule definition, as well as the
inherent interfaces involved. In addition, and equally important to capturing the complete scope of the
P/p in Schedule Development, is documenting any exclusions and risks to the P/p, as these are P/p
attributes that may affect the Schedule Management approach. Per NPR 7120.8, a Research and
Technology (R&T) P/p tends to define a cost/schedule structure rather than an LCC and associated end
date. Thus, it is important to understand the complete scope that will take the R&T P/p to its end date.
If relevant data or documentation has not been developed, the P/S should lead or otherwise galvanize
development of these documents. More information on how the P/S might interface with other PP&C
functions is described in the PP&C Handbook.40 By initially gathering and understanding as
much of this data as possible, the effort will lead to a more accurate and meaningful schedule for use in
guiding P/p management.
5.4.3 Documenting the BoE in Conjunction with Developing the IMS
Documenting the basis rationale for Activity Attributes initializes the BoE, along with capturing primary
data sources. This activity should be done in conjunction with developing the IMS as defined in Section
5.5, as well as capturing any related assumptions or rationale from the assessment of the IMS as
described in Chapter 6.
As the P/p continues through its life cycle and changes are made to the IMS, the rationale for changes
and supporting data should be captured within the BoE and tracked through the P/p’s change control
process, as described in Chapter 7.
40https://nen.nasa.gov/documents/879593/1386755/PP%2BC+Handbook+1-5-17.docx/097acedf-1df7-4676-b9c1-
4c0c1e83dc2e?version=1.0&download=true
Figure 5-10 illustrates how the implementation of the SMP results in a time-phased set of activities that
aligns the development and deployment of the IMS and associated products with the continuous
Schedule Management processes according to the P/p life cycle. The continuous Schedule
Management processes that are executed throughout the P/p life cycle according to three of the sub-
plans in the SMP include:
• Schedule Assessment and Analysis, Chapter 6
• Schedule Maintenance and Control, Chapter 7
• Schedule Documentation and Communication, Chapter 8
Figure 5-10. Schedule Development and its relationship to the other Schedule Management processes.
The primary output of Schedule Development is the IMS, although other Schedule Outputs, such as a
Summary Schedule or an Analysis Schedule may be produced, in addition to Schedule Performance
Measures. Schedule Outputs and Schedule Performance Measures are further defined in Section 5.5.13.
Technical
• Supports emailing schedule data in native file formats
• Uses Open Database Connectivity (ODBC) or Dynamic Data Exchange (DDE) standards to
read/write to other databases
• Provides capability of saving data files such as MPX, DBF, XML, HTML, and X-12
• Provides online “help” capability
• Provides capability for creating PDF files or graphic files such as: jpeg, bmp, gif, or tif
Interface
• Supports data interface to chosen in-house EVM applications (e.g., Internally developed EVM
spreadsheets or commercial EVM applications)
• Supports data interface to EVM data analysis applications
• Supports data interface to schedule risk analysis applications, such as range estimates and JCL
analyses
MS Project and Oracle Primavera P6 are common scheduling tools that are used throughout the Agency
and can handle differing levels of schedules and integration with other PP&C tools and data. Both on-
site and cloud versions (such as MS Project Server) generally meet the capabilities listed above.
The table of Activity Attributes, developed as a prerequisite to Schedule Development, specifies the
fields needed in the IMS to interface with other P/p management processes, as well as any additional
information that the P/p needs. These fields are allocated during the Schedule Development process.
The minimum set of Activity Attributes that require coded fields in the schedule is shown in Figure 5-11.
41 GAO-16-89G. GAO Schedule Assessment Guide. Page 24. December 2015. http://www.gao.gov/assets/680/674404.pdf
Activity Attributes
Program/Project: ________ Date: ________
• Activity ID: This information comes from the project activity list.
• Activity: This is the name of the activity from the project activity list.
• WBS No: This identifies where this activity can be found in the WBS.
• Activity Description: A detailed description of the work to be performed for this activity; it should be consistent with what is provided in the project activity list.
• Activity Responsibility: Lists who is responsible for executing the work associated with this activity.
• Resources and Skill Sets Required: Describes the resources needed to perform the work. For human resources, this should include the necessary skill sets and skill levels required to complete the work.
• Activity Predecessors: Lists other activities which must occur before this activity.
• Predecessor Link Relationship: Describes whether the predecessor has a start-start, finish-start, or other type of scheduling relationship.
• Predecessor Dependency: Describes any dependencies on predecessor activities, such as lead/lag times.
• Activity Successors: Lists other activities which must occur after this activity.
• Successor Link Relationship: Describes whether the successor has a start-start, finish-start, or other type of scheduling relationship.
• Successor Dependency: Describes any dependencies on successor activities, such as lead/lag times.
• Type of Effort: Describes whether the work for this activity is level of effort, fixed effort, fixed duration, apportioned effort, or another type of work.
• Location of Activity: Describes where the work for this activity will be performed.
• Activity Assumptions: Lists all assumptions associated with this activity. These should also be included in the project’s assumption log.
• Activity Constraints: Describes activity constraints such as firm milestone dates, resource constraints, or any other identified constraints which may impact this activity.
• Activity Uncertainty Duration (Min/Most Likely/Max): The minimum, most likely, and maximum number of uncertain days that can impact the activity.
• Risk Likelihood: The likelihood of a risk’s occurrence. Note that there can be multiple risks.
• Risk Description: Details about the causes and effects of the risk.
• Activity Risk Duration (Min/Most Likely/Max): The minimum, most likely, and maximum number of days that the risk can impact the activity.
Figure 5-11. Activity Attributes.
There is virtually no limit to the number of fields and field codes that either exist or can be created. The tools used
along with the P/p characteristics determine the limitations on this parameter. The number of field
codes needed for a P/p will vary widely, depending on many factors such as P/p size, maturity, industry
or technology, complexity, entities involved, phase, and so on. The appropriate number of field codes to
use is the number required to effectively and efficiently manage the P/p. The same can be said for the
types of field codes to use. Field codes can be set using different code types, such as Flag, Number,
Text, and Date codes. Commonly-used field codes include:
• WBS. The WBS defines hierarchical organization of the work to be executed by the P/p team.
The WBS is an important coding structure that when incorporated into the IMS, aids in
extracting and formatting desired schedule data.
• Control Account Code (and/or CWBS). Although it may differ from the WBS code, the control
account code, or CWBS, helps to map work scope to the control account from which the work is
funded. Each control account may be mapped to more than one work package, but each work
package can only be mapped to one control account (a minimal mapping check is sketched after
this list). Coding schedule activities by control account is a
management tool to integrate scope, budget, cost, and schedule and can help facilitate earned
value performance measurements.
• Responsibility Code. The responsibility code can be used in a number of ways, such as a
reference to a responsible person, team, or group (e.g., Technical Lead, CAM, IPT, etc.). This is
not the same as the OBS, but rather a more specific identifier. The field code can be useful for
sorting and grouping of data into distinct work groups. The result may then be used to collect
status, plan resources, or to communicate a group’s work plans.
• Phase Code. The phase code may be defined in different ways, but usually as a logical grouping
of work that flows along the P/p timeline, more or less sequentially. For example, one such
definition may result in phases such as Engineering, Procurement, Fabrication, and Testing. It is
often used to organize data to facilitate interface-planning efforts and produce summary-level
reports.
• Activity Type Code. The activity type code helps to distinguish between schedule activities that
have different functions within the IMS, such as milestones, regular activities, LOE activities,
summary activities, “margin activities”, or other “placeholder activities.” Activity type coding
can facilitate Schedule Assessment, Analysis, and Control by making it easier to filter through
activities of interest. It is a recommended practice that any “activity” other than a regular
activity be coded appropriately.
• EVM Codes. The EVM code helps to filter on tasks that are used as inputs to EV metrics. The
selected IMS tasks’ Unique Identifiers (UID) are associated with a coding structure, which ties
into the EVM software used. The coding structure identifies the start and end of tasks that
support a milestone within the EVMS. The coding structures can be as basic or as
comprehensive as necessary for the P/p’s needs.
• Other Commonly-Used Codes. P/p activities may also be coded for consistency with such
information as the related contractor, location, phase, contract line item number (CLIN), work
package number, CAM, and SOW paragraph as applicable.42 Other commonly-used or
customized codes may include Activity ID, Area, System, Department, Step, Priority, Resource
Names, Resource Costs, Uncertainties, Risks, etc.
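The control account mapping rule noted above (each work package maps to exactly one control account, while a control account may own many work packages) can be checked mechanically. A minimal sketch, with hypothetical identifiers:

    # Hedged sketch: verify each work package maps to exactly one control account.
    # Identifiers are illustrative.
    from collections import defaultdict

    task_coding = [
        # (task UID, work package, control account)
        (101, "WP-01", "CA-1"),
        (102, "WP-01", "CA-1"),
        (103, "WP-02", "CA-1"),    # CA-1 owns two work packages: allowed
        (104, "WP-03", "CA-2"),
        (105, "WP-03", "CA-3"),    # violation: WP-03 appears under two control accounts
    ]

    wp_to_ca = defaultdict(set)
    for _, wp, ca in task_coding:
        wp_to_ca[wp].add(ca)

    for wp, cas in sorted(wp_to_ca.items()):
        if len(cas) > 1:
            print(f"{wp} is mapped to multiple control accounts: {sorted(cas)}")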
P/p management may occasionally be faced with an opportunity to become creative with regard to
coding of data. For example, schedules may need to be constructed that are adaptable to special
requirements specific to a particular report or action tracking product. In some cases, a request for
isolating a particular requirement, design, fabrication, or test phase may be requested. Most scheduling
software tools are flexible in allowing field customization for filtering, sorting, and grouping to enable
displaying specific criteria. Coding may become more informal in these cases but should still be
documented. It is a recommended practice to maintain a coding dictionary, or some equivalent
documentation, to capture field code information. In larger P/ps this document should be incorporated
by reference or inclusion in other applicable P/p documentation with changes controlled appropriately.
Once particular field codes are defined for use in a P/p, it is a recommended practice that the field code
value be used consistently for all related P/p data. Consistency is the key to a successful data structure
and coding scheme. For example, if the resource abbreviation “E” for “Engineers” has been established,
this resource abbreviation should be used in all places where a resource abbreviation for “Engineers” is
required. There may be occasions where this practice may not be practical, or even possible, due to
42 GAO-16-89G. GAO Schedule Assessment Guide. Page 24. December 2015. http://www.gao.gov/assets/680/674404.pdf
system limitations or incompatibilities. In these scenarios, a cross reference table can be created to
relate pertinent codes. Continuing the example above, if the P/p’s payroll tool uses the abbreviation
“Eng” for “Engineers,” but the scheduling tool limits resource abbreviations to a single character, it may
be necessary to use “E” for Engineers in the scheduling tool. The cross-reference table in Figure 5-12
would then contain the following entry:
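A hedged sketch of such a cross-reference entry follows; the tool names and structure are assumptions of this example:

    # Hedged sketch of the cross-reference entry described above: the same
    # "Engineers" resource carries different abbreviations in different tools.
    resource_xref = {
        "Engineers": {"scheduling_tool": "E", "payroll_tool": "Eng"},
    }

    def translate(abbrev, from_tool, to_tool, xref=resource_xref):
        """Translate a resource abbreviation between tools via the canonical name."""
        for name, codes in xref.items():
            if codes.get(from_tool) == abbrev:
                return codes[to_tool]
        raise KeyError(f"No cross-reference for {abbrev!r} in {from_tool}")

    print(translate("E", "scheduling_tool", "payroll_tool"))   # prints: Eng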
It is important to maintain the integrity of the data structure while enabling various users with varying
needs to query the data effectively and efficiently. Coding enables various forms of filtering, such as the
grouping or sorting of data without altering the structure. Grouping refers to the gathering of data that
share some common characteristic. Sorting refers to ordering data in an arrangement that differs from
the natural order as stored in the database. Users may find it useful, for example, to order activities by
the planned start date. Sorting by planned start in ascending order would generate a list of activities in
the order they are scheduled to be worked. In some situations, it may be desirable to group together
schedule activities that use the same resource for certain reports, for example. Grouping by values in a
resource code field would enable this function. Or, it may be desirable to group schedule tasks
together that use the same Center code for a report that would show certain data summarized by
Center. This will aid P/p personnel that need to reference or read schedules without having a detailed
understanding of the scheduling tool being used.
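The grouping and sorting described above might look like the following sketch in practice; the activity data and field names are illustrative assumptions:

    # Hedged sketch: sort activities by planned start, then group by a Center code
    # field, mirroring the filtering described above. Data is illustrative.
    from itertools import groupby
    from operator import itemgetter

    activities = [
        {"name": "Integrate Avionics", "start": "2016-03-01", "center": "GSFC"},
        {"name": "Fabricate Panel", "start": "2016-01-15", "center": "MSFC"},
        {"name": "Thermal Vac Test", "start": "2016-05-10", "center": "GSFC"},
    ]

    # Sort by planned start (ISO dates sort correctly as strings).
    by_start = sorted(activities, key=itemgetter("start"))

    # Group by Center; the sort is stable, so start order is preserved within groups.
    for center, group in groupby(sorted(by_start, key=itemgetter("center")),
                                 key=itemgetter("center")):
        print(center)
        for act in group:
            print(f"  {act['start']}  {act['name']}")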
The use of field codes can aid in the integration of P/p management (e.g., PP&C functions). If EVM is
required, it is a recommended practice that schedule data be coded with the set of NASA-identified EVM
fields, at a minimum, since the complete set of EVM milestones comprises the PMB from a schedule
perspective. The EVM-required field codes are provided on the SCoPe website.43 Figure 5-13 shows an
example of a schedule coded with an interface to the Risk Management System, two interfaces with the
Earned Value Management System (EVMS), and an identification field listing the CAM for each activity.
The project shown had approximately 20 field codes defined for information and interface with other
processes. The intrinsic fields are those that are “hard-coded” in most scheduling software. Other
important attributes often used, but not shown in the example, are:
• From the Organizational Breakdown Structure (OBS), a code that identifies the organization that
is responsible for execution of the activity. Example: Power and Propulsion Division, code PPD.
• For contracted activity, a code that identifies the contract and/or a code that identifies the
contractor.
• For contracted activity, a Contract Line Item Number (CLIN).
• Flags are commonly used to identify specific classes of activities to enable quick search, e.g.,
notification milestones, control milestones, target milestones, interface milestones, critical
activities, etc.
• Sometimes integration working groups are assigned to a collection of activities that are closely
related, such as APW (Avionics, Power, and Wiring).
In a similar fashion, but not shown here, the cost database for the IMS has fields that need to be
established for the cost interface including resource name, resource costs, uncertainties, and risk
functions from the Risk Management System.
Although CPM scheduling offers a visual, time-phased representation of P/p activities, the P/S should be
aware that it does have limitations when used in its most basic form:
• Based on only deterministic task duration; does not consider duration uncertainty
• Less focus on non-critical tasks that can cause risk
• Does not consider resource dependencies; assumes resources are free when needed
• Misuse of float/slack (work expands to fill the time)
• Early finishes (time gains) not effectively being used by subsequent activities (typically due to
early start dates not accounting for resource availability or lack of resource-informed schedule)
Thus, it is a recommended practice for NASA P/ps to use CPM scheduling in an expanded form to include
the integration of other programmatic aspects (e.g., cost, risk, etc.), which provides a more integrated
and holistic representation of the P/p. For instance, CPM scheduling is more effective and informative
when risks and resources and/or costs are integrated into the schedule. If management focuses solely
on critical activities without taking into account critical resources, it risks ignoring or overworking a P/p’s
most valuable assets and potentially jeopardizing the P/p’s timely completion.44 If management does
not consider the potential discrete risk impacts to a schedule, it may not be managing to the path most
likely to delay the P/p.
44 GAO-16-89G. GAO Schedule Assessment Guide. Page 87. December 2015. http://www.gao.gov/assets/680/674404.pdf
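To illustrate the first limitation noted above, the sketch below performs a deterministic CPM forward and backward pass over a toy network; note that nothing in it accounts for duration uncertainty, resources, or discrete risks. The network and durations are assumptions for illustration only:

    # Minimal deterministic CPM forward/backward pass, illustrating the basic form
    # discussed above: single-point durations, no resource or risk integration.

    durations = {"A": 5, "B": 3, "C": 7, "D": 2}          # days
    preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    # Forward pass (assumes tasks are listed in a valid topological order).
    early_finish = {}
    for task, dur in durations.items():
        early_start = max((early_finish[p] for p in preds[task]), default=0)
        early_finish[task] = early_start + dur

    project_finish = max(early_finish.values())

    # Backward pass to compute total float and flag the critical path.
    succs = {t: [s for s, ps in preds.items() if t in ps] for t in durations}
    late_finish = {}
    for task in reversed(list(durations)):
        late_finish[task] = min((late_finish[s] - durations[s] for s in succs[task]),
                                default=project_finish)

    for task in durations:
        total_float = late_finish[task] - early_finish[task]
        tag = "critical" if total_float == 0 else f"float {total_float}d"
        print(f"{task}: early finish day {early_finish[task]} ({tag})")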
Document the BoE for the Scheduling Hierarchy
Because the IMS requires traceability to the WBS, it is important to include in the BoE rationale for any
activity flows that are not organized according to the WBS hierarchy. Providing justification for the
hierarchy used in the schedule will aid in resource allocation and understanding the critical path, as well
as tracing accountability to the appropriate Technical Leads.
The Requirements Check, Procedure 1, should be performed at this juncture to ensure IMS traceability
to the P/p’s requirements set, which includes the WBS, as described in Section 6.2.2.1.1.
Activity nomenclature convention or methodology should be established at the beginning of the P/p and
adhered to throughout the P/p life cycle. It is a recommended practice that summary activity names be
aligned with the WBS naming structure for easier traceability. A P/p may elect to use a “noun, adjective,
modifier” or “modifier, adjective, noun” convention of for all summary activity descriptions and a “verb,
adverb, modifier” or “modifier, adverb, verb” convention for all discrete measurable activities. For
example, detailed (non-summary) activity descriptions should contain a “verb” so it is completely clear
what the accomplishment of work being scheduled is (e.g., “Fabricate CM Front Bay Access Panel
BR549”), whereas summary-level activities descriptions should not contain a verb and be more “noun”
oriented (e.g., “CM Fabrication of all Access Panels”). This type of standardized approach, if used
consistently, will make efforts such as reporting or searching the Schedule Database and IMS much
easier.
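A lightweight, assumption-laden sketch of such a convention check is shown below; a real check would draw on the P/p’s own dictionary of approved action verbs rather than the small list assumed here:

    # Hedged sketch: a naming-convention lint using a small, assumed verb list.

    ACTION_VERBS = {"fabricate", "integrate", "test", "review", "conduct",
                    "deliver", "assemble", "verify", "install"}

    def looks_discrete(name):
        """True if the activity name begins with a recognized action verb."""
        first_word = name.split()[0].lower()
        return first_word in ACTION_VERBS

    print(looks_discrete("Fabricate CM Front Bay Access Panel BR549"))   # True
    print(looks_discrete("CM Fabrication of all Access Panels"))         # False (summary style)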
A key consideration for capturing all scope in the schedule is making the schedule understandable and
easy to follow and use. While the content of each task in the schedule may be perfectly clear to the P/S,
it must also be absolutely clear to the PM, Technical Lead (e.g., WBS Element Owner/Control Account
Manager/Integrated Product Team Lead), and other P/p personnel (e.g., PP&C personnel). The phase
and maturity of a P/p often dictate how the scope is modeled in the schedule. Figure 5-15 provides an
overview of the expected schedule content and maturity at each phase of a Single-Project Program or
project:
Pre-Phase A (Concept Studies). Pre-Phase A schedules should include major development and
integration milestones representing: key milestones, project reviews, integration points, external and
internal interfaces or handoffs, and deliverables. Additionally, there should be high-level summary tasks
reflecting the general time-phasing estimated for developing system/mission requirements, hardware
design, fabrication, integration & test, and operational capabilities. These early, high-level summary
estimates are typically derived from parametric models or historical data from past similar projects.
However, detailed information should be available and included at a discrete and measurable level of
detail for each concept study that may be involved during this incremental phase.
Phase A (Technology Development). Phase A preliminary schedules should have significantly more
detail than the Pre-Phase A schedules. During this phase of Formulation, the mission/system concept
definition is completed, most concept and trade studies are completed, preliminary requirements are
established, and a preliminary Project Plan is developed. Therefore, project definition becomes clear
enough during Phase A to allow for a more discrete breakdown of work tasks and milestones.
Milestones should have predecessor and successor activities. A preliminary critical path should be
identifiable, and there should be reasonable slack on the activities. Funded schedule margin should be
included, and resources should be identified. Additional unfunded margin activities may also be
included. The phased schedule should be synchronized with the project phase budget. Preliminary
requirements by subsystem, remaining trade studies, preliminary and final design by subsystem,
long-lead procurements, preliminary systems engineering products, preliminary safety and mission
assurance products, fabrication by subsystem, subsystem and system integration flow, subsystem and
system testing, documentation development, flight simulation software development and deliverables,
hardware development and test, and test operations development for ground and flight should all be
identified in the schedule during Phase A.
Phase B (Preliminary Design and Technology Completion). Phase B is the final incremental phase of
Formulation, which should produce the necessary project definition to allow for discrete and
measurable IMS detail, at least for the near-term of six to twelve months. Near-term effort should be
scheduled in meaningful tasks with shorter durations; durations not exceeding one month are
preferable. IMS task detail should extend down to the level where work is discretely planned and
measured at the lowest levels of the WBS, and potentially lower where necessary (i.e., subsystem,
component, software function, test phase, procurement deliveries, GFE deliveries, interface points,
facility modifications, miscellaneous documentation development stages, preliminary orbital debris
assessment, etc.). A rolling wave approach for planning the out-years may be used provided that the
total scope of the project is identified within the schedule and that all WBS elements are included.
Durations for the out-year planning phases can be further decomposed as the schedule matures.
However, in cases where far-term effort is well defined and task information is already available at the
above-described low level of detail, it should be included in the IMS at the earliest opportunity. Phase B
schedule baselines are the foundation for measuring project schedule performance throughout
implementation. Reporting and other schedule management criteria should be in place and in practice
by the project. Regular status updates, reporting, and performance analysis should be taking place in
the project office. The schedule should be detailed enough to accommodate the collection of actuals
(time and cost) at the appropriate WBS level. The IMS will receive final baseline approval at the end of
Phase B. The baseline will then serve as the EVM performance measurement baseline.
Phase C (Final Design and Fabrication). Phase C is subject to the same guidance as Phase B. As time
proceeds, far-term work tasks with longer durations should be broken down into clearly defined and
meaningful tasks with shorter durations (not exceeding one month, or potentially shorter). Special focus
should be given to providing clear schedule visibility into the completion of final design by specifying
tasks and “release milestones” for specific design or component-level drawings. Fabrication tasks
should clearly delineate the necessary work steps that reflect the planned manufacturing work flow. IT
development should clearly provide detailed tasks for software functional design, code, debug, unit and
integrated testing, software verification and validation, IT hardware development, integration, and test.
Specific tasks for Quality Assurance and buy-off should also be clearly identified, as well as orbital debris
assessment baseline documentation. Product delivery milestones from various fabrication process
completions should reflect the necessary handoff points to hardware assembly and systems integration.
Phase D (System Assembly, Integration and Test, Launch and Checkout). The above Phase C guidance
also applies to Phase D. Again, as time proceeds, far-term work tasks with longer durations should be
broken down into clearly defined and meaningful tasks with duration lengths similar to those
recommended in Phase C. Special focus should be given to clearly defining the discrete flow of tasks
necessary for requirements verification and for hardware and software components to be assembled
and then integrated into subassemblies, subsystems, and systems, reflecting the work required for final
assembly, integration and test. Schedule detail for this phase should clearly delineate the necessary and
measurable work steps that reflect the assembly, integration and test flow of work. Specific tasks for
Quality Assurance and buy-off, as-built hardware and software documentation, final systems
acceptance reviews, operations procedure finalization, Operations training, and certification should also
be clearly identified. Specific hardware deliveries for Launch Operations activities should be included. It
should be noted that all pre-launch work should be verified and closed by the Flight Readiness Review
(FRR), which precedes KDP E.
Phase E (Operations and Sustainment). The focus of the schedule for the incremental Phase E is the
definition of tasks for execution of the Mission Operations Plan: final verification and validation reports,
flight readiness reviews, final processing of launch hardware, ground operations, service preparation for
launch, launch activities through achieving operational orientation, and on-orbit activities relating to
mission tracking, commanding, telemetry, trajectory, systems analysis, and mission payload
initialization and sustainment. Operations tasks with longer durations should be broken down into
clearly defined and meaningful tasks with shorter durations. Special focus should be given to clearly
defining the discrete flow of tasks necessary for Launch Operations and Sustainment.
Phase F (Closeout). The final phase, Phase F, should also be defined at the same discrete and
measurable level of detail as described above. The focus of this incremental phase should address tasks
such as: de-orbit preparation and execution, abandonment of in-place flight hardware, recovery of
project assets, data/equipment disposition and storage, final environmental impact disposition and
resolution, lessons learned, contract closeouts, and final public education and notification of reporting.
Figure 5-15. Relationship between the NASA life cycle phases and project schedule content.
All identified milestones should be incorporated into the IMS. P/p notification and control milestones,
which are used as control points for work scope performance, are typically defined during pre-Phase A
in the Milestone Registry and may also be identified as events in the IMP, if it exists. The IMS should
always include a start milestone, which is a predecessor for the work activities at the beginning of a P/p,
as well as a finish milestone, which is the successor to all logic paths at the end of the P/p. Milestones
may also be used to identify major P/p events, such as LCRs, KDPs, or major test events. Contractual or
acquisition milestones (e.g., procurements, hardware deliveries, etc.), interface milestones, and
programmatic milestones should also be included in the IMS, if available. Locating P/p milestones at the
top of the IMS also helps to facilitate analysis. In most cases, milestones should be tied to or represent a
specific product deliverable or event and should have clear, objective (quantifiable) criteria for
measuring accomplishment.
It is necessary to keep in mind when developing the IMS that every schedule activity will eventually be
updated. The identified activities should facilitate the measure of progress. Schedule data that is task-
oriented lends itself to a more meaningful approach to monitoring task progress through the Schedule
Maintenance and Control process, as each activity is easily identifiable for updating purposes.
45 GAO-16-89G. GAO Schedule Assessment Guide. Page 14. December 2015. http://www.gao.gov/assets/680/674404.pdf
impact other P/p management processes. Task-oriented activities should be sufficiently detailed to allow
for the practical establishment of defined finish-to-start network logic relationships. A lack of clear
understanding of the effort involved in each task can make Schedule Assessment and Analysis
cumbersome. However, using an excessive number of logical relationships to the same task or
milestone complicates schedule analysis.46
One exception to having consistency in schedule granularity occurs when it is important to know exactly
which detailed activities (and associated costs) are the most affected by risk and therefore constitute
the critical or driving paths. It is a recommended practice that high-risk and/or high-cost areas within the
P/p reflect more task detail within the IMS to support Schedule Analysis. Another example
would be the addition of tracking milestones that would be used in some of the performance
measurements. P/Ss should keep in mind that the level of detail used must lend itself to meaningful
cost/schedule and schedule/risk integration. It should also be noted that the level of schedule detail
may need to facilitate the type of EV measurement technique (e.g., 0-100, 50-50, weighted milestones,
percent complete, level-of-effort) that will be assigned in each earned value work package, which are
discussed in the NASA EVM Handbook. In addition, some placeholder activities that are not defined in
the WBS, nor captured in the BoEs, may need to be added to support other P/p process interfaces, such
as the Business Management System, EVMS, and Risk Management System or P/p-specific tracking
tools.
• Rolling Wave. Rolling wave planning is a method that allows for scheduling to occur in waves,
through progressive elaboration, adding more detail as the P/p evolves and work activities
become clearer. The rolling wave method involves the use of both detailed and summary tasks
and can be applied to CPM scheduling. Rolling wave planning is useful when dealing with long
development or repetitive production schedules. It is also is widely used across NASA P/ps in
conjunction with EVM techniques, as illustrated in Figure 5-16.
46 PASEG, Version 3.0. National Defense Industrial Association (NDIA), Integrated Program Management Division (IPMD).
March 9, 2016. Page 61.
Figure 5-16. An example of the rolling wave planning approach used in conjunction with EVM.
When using the rolling wave method, near-term tasks (i.e., activities within 6-12 months of the current
date) are planned to a lower, discrete level of detail. It is a recommended practice for schedule activity
durations to be less than two times the update cycle (e.g., less than two months) for near-term
activities, as this allows for reporting of the start and finish of an activity within one or two update cycles,
allowing management to focus on performance and corrective action if needed. Keeping durations to
two months or less will certainly benefit P/ps where EVM is being employed and should result in
increased accuracy in performance data. Tasks with durations longer than two months tend to make
measurement of objective accomplishment more difficult to assess accurately. Exceptions to this
recommended practice include procurement activities (e.g., long lead items) or level of effort (LOE)
activities (e.g., administrative support).47 This approach also enhances the P/S’s ability to more
accurately identify the P/p critical path.
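The duration guideline above lends itself to a simple automated check. The sketch below is illustrative only; the task data, type codes, and thresholds are assumptions of this example, not a NASA coding standard:

    # Hedged sketch: flag near-term tasks whose durations exceed twice the update
    # cycle, with LOE and procurement tasks exempted as noted above.
    from datetime import date, timedelta

    UPDATE_CYCLE_DAYS = 30                                   # monthly status cycle
    NEAR_TERM_CUTOFF = date(2016, 1, 1) + timedelta(days=180)
    EXEMPT_TYPES = {"LOE", "procurement"}                    # exceptions noted above

    tasks = [
        {"name": "Write test procedures", "start": date(2016, 2, 1),
         "duration_days": 90, "type": "discrete"},
        {"name": "Long-lead valve buy", "start": date(2016, 3, 1),
         "duration_days": 200, "type": "procurement"},        # exempt
    ]

    for t in tasks:
        in_near_term = t["start"] <= NEAR_TERM_CUTOFF
        too_long = t["duration_days"] > 2 * UPDATE_CYCLE_DAYS
        if in_near_term and too_long and t["type"] not in EXEMPT_TYPES:
            print(f"Consider decomposing: {t['name']} ({t['duration_days']}d)")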
Tasks that are scheduled to occur farther into the future should be included in the schedule but may be
planned at a more summary level of detail or planning package level. These summary tasks, while
reflecting less detail, should still provide enough definition of future work to allow for effective
identification and tracking of the P/p critical path or other driving paths. However, rolling wave planning
should not be used as a way around reflecting the most meaningful level of detail anywhere in the
schedule if the information is already known. Tasks should be developed to a discrete level of detail as
early as possible in the P/p life cycle to help to identify and mitigate P/p conflicts, risks, and problems.
This is particularly important for EVM purposes, as no Performance Measures are taken on planning
packages, only work packages. Durations should be revisited periodically as work progresses and as new
information becomes available. Thus, as future summary-level tasks (or planning packages) come into
the near-term window, they should be planned to a greater level of discrete and measurable detail and
incorporated into the IMS. In addition, the use of the rolling wave approach should be defensible and
supported by the BoE, since it is quite possible that future detailed planning will reveal situations that, if
known earlier in the P/p, could have resulted in more efficient and less costly work plans.
There are a number of integration points within any P/p development flow, for example SDR, PDR, CDR,
SIR, Begin ATLO, etc. It is a recommended practice that summary-level flow diagrams be developed as
an aid to facilitate the assignment of schedule dependencies. Laying out summary-level flow diagrams
can help establish the flow of activities in early schedule development and is often employed before
schedule activities are developed to the lowest level of detail (Section 5.5.7). Flow diagrams are useful
not only as a guide for linking activities, but also as a communication tool to aid in the P/p’s
understanding and reporting of the activity relationships. Figure 5-17 is an example flow diagram from a
NASA project.
Figure 5-17. An example of a flow diagram showing where spacecraft, instrument subsystems and software drop into the
system integration and test flow.
Flow diagrams generally show major integration points and which subsystems and/or components flow
into them. For example, in the first box on the left of the figure, the major structure and the propulsion
system are integrated. Also included in that first step are supporting subsystems and subsystems
requiring early integration due to access problems. After these subsystems are integrated, various tests
are performed. Following that step, the remaining spacecraft subsystems and some of the instruments
are integrated and tested. The process continues through the remaining steps. These steps follow
repetitive cycles of “integrate then test”, culminating in the last step leading up to launch vehicle
integration and launch.
Once the overall workflow is sufficiently understood, activity dependencies can be assigned. Every
milestone and activity in the schedule should have at least one predecessor and at least one successor
(i.e., no “open ends”). Two acceptable exceptions are the P/p start milestone, which has no
predecessor, and the P/p finish milestone, which has no successor. Another exception to this rule may
occur for activities or milestones that represent receivables or deliverables (Rec/Del) as described
below. Any other open ends should occur only for valid reasons that are accurately documented.
Activities should also not be arbitrarily restricted but should be logically linked such that progress-driven effort determines remaining duration. Predecessor and successor relationships should be appropriate to the work to be performed and supported by the BoE. Redundant links should be avoided since they often confuse workflows and complicate analysis (e.g., if Task A is linked to Task B, Task B is linked to Task C, and Task A is also linked directly to Task C, then the link between Task A and Task C is redundant). Logic should never be assigned to summary activities, as summary Start and Finish Dates are derived from the detailed activities.
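The redundant-link rule above lends itself to a simple reachability test: a link is redundant when its successor can still be reached from its predecessor after the direct link is ignored. The following is a minimal sketch (not from the handbook); the activity IDs are invented:

from collections import defaultdict

def redundant_links(links):
    """links: set of (pred, succ) pairs; returns links bypassed by a longer path."""
    succs = defaultdict(set)
    for a, b in links:
        succs[a].add(b)

    def reachable(start, target, skip_edge):
        # Depth-first search that ignores the direct edge under test.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            for nxt in succs[node]:
                if (node, nxt) == skip_edge or nxt in seen:
                    continue
                if nxt == target:
                    return True
                seen.add(nxt)
                stack.append(nxt)
        return False

    return {(a, b) for (a, b) in links if reachable(a, b, skip_edge=(a, b))}

links = {("A", "B"), ("B", "C"), ("A", "C")}
print(redundant_links(links))  # {('A', 'C')}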
Note: MS Project allows for two scheduling modes: “Manually Schedule” and “Auto Schedule”.
Manually Schedule is sometimes used for task lists or at the start of a P/p, when constraints or
predecessors/successors are unknown, but an output with an overview of key dates is desired.
Manually Schedule calculates the schedule based on dates entered by the P/S rather than the
predecessors and their constraints. Auto Schedule is where the benefits of the scheduling tool tie into
the Schedule Management process, as it considers the constraints and applies the logic to the
relationships entered by the P/S to calculate the schedule.
Receivables/Deliverables
Rec/Dels, also known as givers/receivers, formally document the schedule interfaces and handoffs of critical items or products. The P/S may choose to maintain the Rec/Dels in a separate section near the beginning of the IMS for easier visibility, as shown in Figure 5-18, or embedded in the workflow of the schedule and supporting schedules, as shown in Figure 5-19 and Figure 5-20.
Figure 5-18. An example of receivables/deliverables maintained near the beginning of the IMS.
Figure 5-19. An example showing Rec/Dels as they would appear within the workflow of a Single Consolidated P/p IMS.
Figure 5-20. An example showing a series of Rec/Dels as they would appear between multiple subsystems. Linked properly, the
Rec/Dels can ensure all work is accounted for from the Mechanical Subsystem, to Optics, and finally to Laser.
Level-of-Effort Activities
LOE activities require careful consideration when being linked in the schedule so as not to drive discrete work activities. LOE activities must be carefully modeled so that they do not inadvertently define the overall length of the P/p and drive the critical path. “For example, P/S may choose to avoid the use of logic links on LOE activities or they may create LOE activities that are one day shorter than the actual planned P/p length. Because these techniques are used to circumvent the impacts of long-duration LOE activities on traditional critical path calculations, their use and implications should be thoroughly documented in the schedule narrative and BoE documents.”48 LOE activities are often modeled as hammock tasks, whose durations stretch to span the discrete work they support. Nonetheless, hammock tasks can have descriptions, codes, calendars, resources, costs, and other attributes of a normal activity. Hammocks are very useful for carrying time-related costs and determining the duration of supporting equipment needed for a P/p, as well as for creating summary reports, which support Schedule Communication.
Activity Relationships
There are four relationship models for the activities:
• Finish-to-Start (FS). The successor activity cannot start until the predecessor activity has
completed. This is the most common linkage because it follows the most common workflow: complete an activity and hand off the work to the next activity. It is a recommended practice
that FS relationships be used as often as possible when establishing schedule logic. This
relationship provides for the most accurate calculation of total float.
• Start-to-Start (SS). The successor activity cannot start until the predecessor activity has started.
This relationship is used when two activities need to begin at the same time. This linkage is
commonly used at major integration points or at the beginning of the P/p. For example,
Authority to Proceed (ATP) may trigger the start of a large number of activities. In most cases
this relationship will be used with a lag value. Caution should be taken when using this type of relationship in lieu of breaking the effort down into more meaningful and discrete segments of
work that can more accurately represent the task sequence. Overuse and/or improper use of
start-to-start relationships will potentially hinder true critical path identification.
• Finish-to-Finish (FF). The successor activity cannot finish until the predecessor activity has
finished. This relationship is used when an activity needs to finish and provide something to
another activity so that it too can finish. Sometimes activities may need to be constrained to
finish at the same time because of a coordinated handoff of work to a successor activity. The
same caution as noted for start-to-start relationships also applies to the overuse and/or
improper use of finish-to-finish relationships.
• Start-to-Finish (SF). The successor activity cannot finish until the predecessor has started. For
example, the first step in the predecessor activity may generate a fit-check template needed
before the successor can start. This relationship is very uncommon, and caution should be
exercised before using this relationship to ensure its use is valid.
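A minimal sketch (not any scheduling tool’s algorithm) of how each relationship type positions a successor, using simple day numbers instead of calendar dates and an optional lag, both assumptions for illustration:

def successor_dates(rel, pred_start, pred_finish, succ_duration, lag=0):
    """Return (start, finish) day numbers for the successor."""
    if rel == "FS":    # successor cannot start until the predecessor finishes
        start = pred_finish + lag
    elif rel == "SS":  # successor cannot start until the predecessor starts
        start = pred_start + lag
    elif rel == "FF":  # successor cannot finish until the predecessor finishes
        return pred_finish + lag - succ_duration, pred_finish + lag
    elif rel == "SF":  # successor cannot finish until the predecessor starts
        return pred_start + lag - succ_duration, pred_start + lag
    else:
        raise ValueError(rel)
    return start, start + succ_duration

# Predecessor spans day 0 through day 10; successor needs 5 working days.
for rel in ("FS", "SS", "FF", "SF"):
    print(rel, successor_dates(rel, 0, 10, 5))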
Figure 5-21 is a screen shot from MS Project showing examples of the four linking techniques and how each impacts the flow of the work.
48 GAO-16-89G. GAO Schedule Assessment Guide. Page 14. December 2015. http://www.gao.gov/assets/680/674404.pdf
Figure 5-21. Examples of the four activity logic relationships.
Leads and lags often mask lower-level activities that should have been defined. Figure 5-22 provides an
example from MS Project showing how the leads and lags affect the successor-predecessor
relationships.
Figure 5-22. Example of the use of leads and lags as illustrated in MS Project.
There are instances where these types of relationships do exist and are reflected accurately by the
correct use of lag and lead times (e.g., cure times on concrete pours, bake-out times for printed circuit
board coating, and procurement order lead times). However, in most cases it would be preferable to
use an additional task, appropriately labeled, to represent the lead or lag time and to describe the
reason for the lag or lead (e.g., handoffs). This latter practice facilitates visibility and status updates and
would likely result in a more accurate and maintainable schedule. In other cases, it may be appropriate
to use a soft constraint. If using leads and/or lags, it is a recommended practice for justification to be
provided in the task notes or in a separately identified field within the IMS and/or Analysis Schedule.
Caution. Lead and lag times should only be used when these values represent real situations of needed
acceleration or delay time between tasks. Use of these techniques creates a maintenance issue should
the basis for the lead or lag time change. Lead and lag times are difficult to identify and document,
conceal actual activities that are not defined, and are often difficult to discern when analyzing a
schedule, as well as when performing an SRA or ICSRA. They may also corrupt float/slack calculations
and distort the critical path and driving paths. They also hinder the ability to capture EVM metrics (e.g.,
EVM cannot be taken against a lag because there is no scope associated with a lag).
It is a best practice for the schedule logic to limit the use of constraints other than As Soon As Possible
(ASAP) to situations that represent actual work flow. Soft constraints are preferable to hard
constraints or the use of leads or lags and may be used to delay the Start or Finish of a task because they
do not interfere with the logical flow of the schedule or the critical path calculations. For example, a
soft constraint might be used to delay the start date of a task to the expected availability date of data,
materials, or other resources that are not reflected in the IMS. Common soft constraint types that can
be imposed on an activity include, but are not limited to, the following:
• As Soon As Possible (ASAP). An Activity or Milestone will finish as early as possible based on its assigned logical relationships and duration. This condition can also be described as the absence of any constraint.
• As Late As Possible (ALAP)*. An Activity or Milestone will finish as late as possible without affecting the schedule end date. It is a recommended practice that the ALAP constraint never be used (specific to MS Project). This constraint uses total float to calculate its Early Finish date instead of free float. This can cause the P/p end date to slip.
• Start No Earlier Than (SNET) or Start On or After. An Activity or Milestone will start no earlier
than the assigned start date. However, it can start as late as necessary. This constraint is often
used to phase the activities such that they align with budget allocation. Sometimes they are
also used to align the activities with the availability of a facility.
• Finish No Earlier Than (FNET) or Finish On or After. An Activity or Milestone will finish no
earlier than the assigned finish date. However, it can finish as late as necessary.
Hard constraints can prevent the logical flow of the schedule relationship logic, distorting the total float
(slack) and critical path calculations throughout the P/p. Hard constraints should be avoided except
where absolutely necessary. Instead, consider using soft constraints or deadlines. For awareness, hard
constraint types include:
• Start No Later Than (SNLT)* or Start On or Before. An Activity or Milestone will start no later than the assigned start date. However, it can start as early as necessary.
• Finish No Later Than (FNLT)* or Finish On or Before. An Activity or Milestone will finish no later than the assigned finish date. However, it can finish as early as necessary. This is a useful constraint to use for a contract deliverable milestone or P/p completion milestone.
• Must Start On (MSO)* or Start On or Mandatory Start. An Activity or Milestone will start on
the assigned date. Use of this constraint overrides schedule date calculations driven by logic,
resulting in a date that may be physically impossible to achieve.
• Must Finish On (MFO)* or Finish On or Mandatory Finish. An Activity or Milestone will finish on
the assigned date. Use of this constraint overrides schedule date calculations driven by logic,
resulting in a date that may be physically impossible to achieve.
Note(*): These types of constraints are often used as completion points in the schedule from which
the total float value is calculated. These constraints are often called “Hard Constraints” because
they can inhibit the correct time-phasing of the schedule. Improper use can cause negative float to
be calculated throughout the schedule.
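A minimal sketch (not any tool’s API) of how a soft constraint defers, and a hard constraint overrides, the logic-driven early start; the day numbers and the function itself are assumptions for illustration:

def constrained_start(logic_early_start, constraint="ASAP", constraint_date=None):
    if constraint == "ASAP":
        return logic_early_start                  # purely logic-driven
    if constraint == "SNET":                      # soft: start no earlier than
        return max(logic_early_start, constraint_date)
    if constraint == "MSO":                       # hard: overrides the logic
        return constraint_date                    # may be infeasible
    raise ValueError(constraint)

print(constrained_start(12))              # 12: ASAP follows the network logic
print(constrained_start(12, "SNET", 20))  # 20: delayed to an availability date
print(constrained_start(12, "MSO", 5))    # 5: logic overridden by a hard date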
Deadlines allow the P/S to place a target completion date on a task or milestone and do not interfere
with the logical flow of the schedule network.
• Deadline (MS Project only). While not listed as a constraint type, a deadline date assignment on any task or milestone has the same result as assigning a “Finish No Later Than” or “Must Finish On”, without compromising the ability for schedule logic to drive the schedule. Float (slack) is calculated against the deadline date as if it were a “FNLT” or “MFO” constraint, generating negative float when the task slips past it, but still allows for accurate critical path analysis against important events in time. Figure 5-23 shows a deadline date represented by a downward pointing green arrow. The finish milestone will be allowed to slip past it; the deadline will not affect how the software schedules the tasks. However, total slack will be calculated against the deadline date.49 The P/S may choose to use deadline dates in the routine (e.g., monthly) schedule status update process to inform management of impending issues, but deadlines should be removed from the approved IMS because of the hindrance they may cause in identifying and managing the P/p’s true critical path.
Minimal use of constraints, other than ASAP, is strongly encouraged. Remember that constraints
override task interdependency relationships. Examples where constraints generally have a valid
purpose include the following:
• Assigning a “Start No Earlier Than” on a scheduled receivable from an external source
• Assigning a “Start No Earlier Than” when resources will not be available until a specified date
• Using a “Finish No Later Than” or “Deadline” on the final product deliverable or P/p completion
point
Constraints may also refer to limitations or conditions that affect the schedule. Typical examples of
these situations may include test facility downtime or unavailability of specialized computer
time/equipment. Take note that for schedules that are resource loaded, these situations are normally
best modeled through the use of resource calendars/assignments within the automated scheduling tool.
49 In MS Project, should any predecessor push a task with a deadline to zero slack, the task with the deadline will automatically show up as “critical” in the Gantt chart. This can be useful for understanding when key tasks hit target completion dates.
Note: Different software tools may have different constraints or even different terminology to describe
constraints. Other tools have additional constraints such as Zero Total Float and Zero Free Float. While
these constraints may be necessary to reflect an actual work situation, they are the exception and not
the rule.
In summary, constraint use other than ASAP should be considered only when necessary to accurately
reflect the plan. When used, careful consideration should be given to which constraint type to apply.
The type of constraint will dictate the impact on float (slack) calculations for the task in question and
other tasks logically linked as successors. Furthermore, depending on the use, the proper calculation of
critical path may be hindered. If using leads or lags, or constraints other than ASAP, it is a
recommended practice to provide justification in the task notes or in a separately identified field within
the IMS.
working time, such as weekends or holidays, per the P/p’s assigned calendar. In general, “days” is the
preferred time unit.50
If a P/p is resource loading the schedule, it may be necessary to use resource calendars, which specify
valid time units that a resource may be available to do work. Both resource and P/p calendars should be
used appropriately and be a key consideration when estimating task durations. When tracking costs
and/or EVM performance within the scheduling tool, it is a recommended practice that the P/p calendar
also be consistent with the accounting calendar to ensure accurate cost data. The P/S should be
cognizant of the impact on task scheduling and schedule analysis when both types of calendars apply.
Specific task and resource calendars should be established during initial schedule development.
Any of these customized calendars may be used throughout the P/p lifecycle, or intermittently, as
necessary. It is a recommended practice that customized calendars be clearly labeled and documented
with rationale as to any distinctions from the standard P/p schedule. Figure 5-24 is an example of a
work calendar extracted from MS Project.
50 GAO-16-89G. GAO Schedule Assessment Guide. December 2015. Page 65. https://www.gao.gov/assets/680/674404.pdf
Figure 5-24. Example of a work calendar extracted from MS Project.
Note: MS Project makes use of “elapsed days” or “edays”. When an elapsed duration is entered for an activity, MS Project calculates the activity duration according to calendar days, scheduling through all non-working time, such as weekends or holidays, regardless of the P/p’s assigned calendar.
• Established Standards. Well established, historically validated and recorded durations for
routine or procedurally-based activities or operations, such as hourly or daily rates per required
quantity.
• Brainstorming. P/p team members approximate durations based on a combination of factors
(e.g., expert judgment, prior experience, and historic actuals).
• Subject Matter Expert Experience and Judgment. Time estimates based on personal knowledge
and/or experience with the same personnel, or from similar P/p work or specialized training,
often guided by historic (actual) data. In cases wherein verified data contradicts expert opinion,
the burden of proof is on the expert to justify his or her judgment in light of the data.
• Analogy. Actual duration from similar activities (i.e., technical content) used as the basis for the
new activity duration, often adjusted for differences in complexity. Careful consideration should
be given to selecting the analogies. While the P/p can use any previous P/p as a relevant
analogy, the NASA OCFO maintains the SMART tool, which uses a parametric approach, to assist
in making this comparison for unmanned space flight missions. If the P/p is interested in a more granular comparison, other databases, such as the CADRe or the NASA Schedule Repository, may provide lower-level details. Furthermore, there may be generally accepted datasets for
particular elements of the P/p schedule that can be referenced for comparison.
• Parametric Analysis and Schedule Estimating Relationships (SERs). Calculated time estimates
derived from a mathematical relationship that defines schedule as a function of one or more parameters or factors, which may include technical parameters (e.g., weight, power, mass, etc.)
as well as parameters for cost. Estimating schedule durations using a parametric approach
involves the same fundamentals as estimating cost. The quantifiable relationships between the
schedule and other P/p factors and influences can be captured as Schedule Estimating
Relationships (SERs) and then used to estimate durations of schedule events, much like Cost
Estimating Relationships (CERs) are used to estimate a particular price or cost. SERs are used to estimate schedule duration by connecting an established relationship with one or more independent variables to the duration time of an event (a minimal SER fitting sketch follows this list).
51 GAO-16-89G. GAO Schedule Assessment Guide. December 2015. Page 65. https://www.gao.gov/assets/680/674404.pdf
Schedule duration data and independent variables are collected to conduct data analysis and
determine if there are statistically significant relationships present to produce an SER. If an
independent variable (driver) demonstrates a measurable relationship with schedule duration,
an SER can be developed. SERs can contain many of the same independent variables as Cost
Estimating Relationships (CERs) but could also be based on different datasets, normalization
techniques, or analysis methods. While the concept is relatively simple, obtaining accurate and meaningful data that can be used to quantify a relationship between an independent variable and schedule duration can be difficult.
Because of the fundamental relationship between P/p schedule behavior and cost behavior,
NASA offers a variety of cost-based databases, models, and tools, which contain schedule data
and/or SERs that can be used effectively for estimating schedule durations. Cost-based
resources are often a good source for parametric estimating of schedule durations, since the
IMS needs to correspond to cost estimates to ensure that enough resources can be applied to
activities to complete them within the expected duration. This should be done before the IMS is
baselined so that the relation between accurate cost and schedule estimates can be verified.52
• Extrapolations. Predicted time estimates calculated from existing known data relationships
and/or trends (e.g., 3-point time estimates).
• Build-Up/Bottom-Up/Grassroots. Decomposition of activities into lower-level tasks which are
estimated and then aggregated at higher levels. Using a detailed engineering build-up estimate
to develop a schedule estimate is a common technique. A highly detailed and logically linked
schedule is the standard product generated by this schedule estimation method. Grassroots
estimating for schedule requires strong attention to detail to be successful. Schedule Analysts
should continue to be careful when differentiating between a build-up schedule estimate and a
given detailed schedule plan. Both may employ the engineering build-up/grassroots approach;
however, there are significant differences. The former reflects an attempt to capture the entire
work effort to analyze durations and the program plan. A build-up schedule estimate, similar to a cost estimate, is an attempt to predict the actuals (i.e., the actual duration and actual finish date). The latter reflects the result of a detailed P/p plan and may contain significant constraints, optimism, or undocumented assumptions. The plan duration and plan finish date from a given detailed schedule plan are attempts to organize future work with the goal of delivering on time.
• Performance-Based. Typically used for replanning or rebaselining purposes, actual P/p
performance such as task duration growth or milestone burndown can be used to estimate
schedule elements, via extrapolation or related techniques.
52 Additional information on the relationship between cost and schedule can be found in NASA Cost Estimating Handbook, Version 4.0. February 27, 2015. Appendix K. Pages K-1 and K-2.
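The following is a minimal sketch of fitting a one-variable SER as a power law (duration = a * mass^b) from historical analogs and applying it to a new mission. The data points are invented for illustration and are not NASA data:

import numpy as np

# Historical analogs: (spacecraft dry mass in kg, development duration in months)
mass = np.array([150.0, 400.0, 800.0, 1500.0, 3000.0])
months = np.array([28.0, 36.0, 44.0, 52.0, 66.0])

# Fit log(duration) = log(a) + b * log(mass), i.e., a power-law SER.
b, log_a = np.polyfit(np.log(mass), np.log(months), deg=1)
a = np.exp(log_a)

def ser_duration(mass_kg):
    """Estimated development duration (months) for a given dry mass."""
    return a * mass_kg ** b

print(f"SER: duration = {a:.1f} * mass^{b:.2f}")
print(f"Estimate for 1000 kg: {ser_duration(1000.0):.1f} months")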
5.5.9.3.2 Schedule Estimating Databases, Models, and Tools
The following sections describe the available databases, models, and tools used by the NASA Schedule
Management community for schedule estimating.
Schedule Repository
In July 2019, NASA began an initiative to collect P/p schedules – IMSs in their native format files – in a
Schedule Repository.53 The purpose of the Schedule Repository is to formally archive P/p schedule data
on a regular cadence. The Schedule Repository also serves as a useful database containing planned versus
actual schedule information over time. Once a P/p is complete, IMS files are made available to the
SCoPe for use in future P/p Schedule Development, including establishing activity duration estimates
and logic flows, as well as in Schedule Assessment, including performing comparisons of an IMS to
analogous P/p IMSs.
Cost Analysis Data Requirement (CADRe) and the One NASA Cost Engineering (ONCE) Database
CADRe provides a common description of P/ps at a given point in time. CADRe is a formally required,
three-part document that describes the programmatic, technical, LCC, and cost- and schedule-risk
information of a P/p at each LCR milestone (SRR, PDR, CDR, SIR, Launch, End of Mission). 54 Both cost
and schedule analysts can develop better estimates of future P/ps by using CADRe to pull historical
records of cost, schedule, and technical attributes for analogous P/ps. CADRe data is also used to
populate the Master List of P/p Schedules and can help analysts generate a variety of Schedule Outputs,
as shown in Figure 5-25.55
53 Agency Policy Guidance to Enhance Earned Value Management (EVM) and Create a Schedule Repository. June 4, 2019.
https://community.max.gov/display/NASA/Schedule+Community+of+Practice
54 CADRe/ONCE – Data Collection and Database.
https://www.nasa.gov/offices/ocfo/functions/models_tools/CADRe_ONCE.html
55 NASA’s Master List of Project Schedules is maintained by OCFO’s Strategic Investment Division and can be requested through
SCoPe, [email protected].
Figure 5-25. Examples of reports that can be manually generated from CADRe data.
Automated search and query of CADRe information is available via the One NASA Cost Engineering
(ONCE) Database.56 ONCE is a web-based database that provides controlled access to the CADRe data
and information. The data stored in ONCE mimics the CADRe templates - Parts A, B, and C. Since
CADRes represent snapshots of a P/p at successive key milestones, the ONCE Database captures all the
changes that occurred to previous P/ps and their associated cost and schedule impacts.57
56 ONCE. https://oncedata.hq.nasa.gov
57 One NASA Cost Engineering (ONCE) Database. https://oncedata.hq.nasa.gov/
58 Schedule Management and Relationship Tool (SMART) can be accessed on the ONCE Database,
https://oncedata.hq.nasa.gov. The creation of SMART resulted from an Office of Evaluation (OoE) research study in 2014.
Analysts with questions about using SMART for SER capability should refer to
www.nasa.gov/offices/ocfo/functions/models_tools/smart.
Schedule Management and Relationship Tool (SMART)
SMART produces a cumulative distribution function (CDF) that reflects the durations of the analogous missions. It also illustrates the confidence level of the P/p’s estimate for further comparison. SMART incorporates NASA Schedule Estimating Relationships (SERs) for another point of comparison. Unlike other third-party SERs, those within SMART are derived from a strictly NASA population for a more applicable assessment and comparison. SMART can also help identify which spacecraft parameters are contributing factors to longer durations. The SMART SER schedule drivers (parameters that are highly correlated with schedule) include: mass, power, mission design life, year of development, number of instruments, mission class, and maximum data rate. SERs are developed for the full development life cycle as well as for intermediate milestones (e.g., SRR to PDR). Figure 5-26 shows an example of SMART inputs and outputs.
NASA Instrument Cost Model (NICM)
Another NASA modeling tool that contains schedule data and SERs is NICM.59 NICM, which is available
via the ONCE Model Portal, focuses specifically on instrument estimation and contains a large database
of many different types of instrumentation.60 This database includes schedule data, and there is a
component within NICM for estimating schedule duration using SERs. The NICM approach to calculating
duration from SERs is unique in that cost is an input to the SER equation. In this way, NICM SERs
establish a functional link between the calculated cost of an instrument and its schedule duration. In
addition to utilizing Cost As an Independent Variable (CAIV), NICM relies on the mission type and
instrument subtype in the SER equation. Figure 5-27 shows the NICM model schedule equations.
Additional information on the instruments used in each SER can be found in the NICM User Guide.61
59 Analysts with questions about using the NICM for SER capability should contact Joe Mrozinski of the NICM development team
at the Jet Propulsion Laboratory (JPL), at [email protected].
60 NICM can be accessed on the ONCE Model Portal, https://oncedata.hq.nasa.gov.
61 “982-0000 Rev. 8. NASA Instrument Cost Model (NICM) Version VIIIc.” July 2018. Jet Propulsion Laboratory, California Institute of Technology.
It is also helpful to know what labor resource skills are available and the experience levels of those skills
to be assigned. An inexperienced technician or crew, for example, may take longer to perform the task
than an experienced technician or crew. While equipment resources are reusable, they may not always
be available during the time needed. Consumable resources must be closely monitored and replenished
as needed to support schedule needs. All of these factors may not be known at the time of making the
initial duration estimate for a task, but they are all considerations that may be used to later adjust a
duration estimate, once their impact is known. In addition, labor and financial reports, reflecting actual
hours and dollars from prior periods or previous P/ps, may also provide helpful information for
estimating durations. These reports provide historical data that can be used for both initial and
replanning efforts which involve work scope that is similar to previous activities or past P/ps. The P/S
must constantly be vigilant in establishing and maintaining a P/p schedule that is current and accurate to
help mitigate resource problems.
Figure 5-28 shows the durations loaded in the schedule after all attributes related to estimating the
durations have been considered.
5.5.10 Identify the Critical Path(s)
It is a best practice for the P/p’s critical path(s) to be clearly identifiable within the schedule
throughout the P/p life cycle. In CPM scheduling, the critical path is generally defined as follows:
• Critical Path. A sequential path of activities in a network logic schedule that represents the
longest overall duration from the status date through P/p completion, which determines the
shortest possible P/p duration (i.e., least amount of float) and earliest possible P/p finish date.
Any slippage of the tasks in the critical path will increase the P/p duration and slip the P/p finish
date.
CPM is also used to determine the amount of flexibility, or float, in each of the logic network paths.
There are two types of float (slack) common to most scheduling tools:
• Total Float. The amount of time that a task or milestone can slip before it becomes part of the
critical path. Total Float directly relates each task to the P/p end date.
• Free Float. The amount of time a task or milestone may move into the future from its early
finish date before affecting its immediate successor task(s).
Given the basic principle that each activity will finish before its successor begins, CPM calculates the
longest path of planned activities to logical end points or to the end of the P/p, and the earliest and
latest that each activity can start and finish without making the P/p longer. The calculations are done by
way of a “forward pass” and “backward pass” without regard for resource requirements/constraints.
This automated process determines which activities are "critical" (i.e., on the longest path, usually a
zero-float path) and which have "total float" (i.e., can be delayed without making the P/p longer).
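A minimal sketch (not from the handbook) of the forward and backward passes over FS links, computing total float and a zero-float critical path; the activity names and durations are invented, and real tools also handle calendars, lags, and the other relationship types:

def cpm(durations, preds):
    """durations: {task: days}; preds: {task: [predecessor tasks]}."""
    succs = {t: [] for t in durations}
    for t, ps in preds.items():
        for p in ps:
            succs[p].append(t)

    order, seen = [], set()
    def visit(t):                # topological order via depth-first search
        if t in seen:
            return
        seen.add(t)
        for p in preds.get(t, []):
            visit(p)
        order.append(t)
    for t in durations:
        visit(t)

    es, ef = {}, {}
    for t in order:              # forward pass: earliest start/finish
        es[t] = max((ef[p] for p in preds.get(t, [])), default=0)
        ef[t] = es[t] + durations[t]

    finish = max(ef.values())
    ls, lf = {}, {}
    for t in reversed(order):    # backward pass: latest start/finish
        lf[t] = min((ls[s] for s in succs[t]), default=finish)
        ls[t] = lf[t] - durations[t]

    total_float = {t: ls[t] - es[t] for t in durations}
    return total_float, [t for t in order if total_float[t] == 0]

durations = {"A": 5, "B": 3, "C": 8, "D": 2}
preds = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
floats, critical = cpm(durations, preds)
print(floats)    # {'A': 0, 'B': 5, 'C': 0, 'D': 0}
print(critical)  # ['A', 'C', 'D']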
It is important to note that if the terminal milestone of the P/p has a hard constraint date assigned to it,
then the critical path could have a positive or negative total float value instead of zero. Figure 5-29
shows an example of a P/p’s critical path calculated from total float.
Figure 5-29. An example showing the critical path based upon total float (i.e., slack).
It is important to note that unless the IMS represents the entire scope of effort and the effort is
correctly sequenced through the logic network, the scheduling software will report an incorrect or
invalid critical path (i.e., the critical path will not represent the activities affecting the P/p finish date).62
Accurate float values can only be determined if a complete and valid network logic is in place.
For most NASA Space Flight P/ps, “P/p completion” may mean “launch” in the context of calculating the
critical path. For Research and Technology P/ps, it may mean the delivery of an end-item, such as a
technology demonstration or an analysis report.
There is an important difference between critical path activities and “critical activities” as potentially
characterized by P/p management. In strict scheduling terms, the critical path is the sequence of
activities that are tied together with network logic that have the longest overall duration through P/p
completion, whereas a “critical activity” is often treated as a task which has been subjectively deemed
important enough to have this distinction assigned to it. For example, KDPs, the development of a
primary system component, important tests, or other high-risk technical activities may be considered
“critical activities.” However, these activities are not always on the P/p’s critical path as calculated by the scheduling tool.
It is a recommended practice that the P/p identify near-critical and driving paths throughout the P/p
lifecycle. A P/S can identify these near-critical or driving paths by isolating the sequences of activities
that have less than some minimum threshold value of total float, as determined by the P/p management team. The number of paths identified usually depends on how “near critical” each path is, although it is common practice to track the primary, secondary, and tertiary critical paths, at a minimum.
In order to establish and allocate margin, it is important to understand what margin is and how it differs
from other schedule resiliency approaches, such as the use of contingency and float. Within the context
of Schedule Management, the following definitions for schedule resiliency are used:
• Schedule Margin. Schedule margin is a separately planned quantity of time (working days)
above the planned work duration estimate to be used specifically to address/absorb the impacts
due to risks and uncertainties.65 It is a risk-informed duration that is included as “activities” in
the schedule prior to baselining. Margin is intentionally loaded in the IMS just like any other
63 “Concurrently Verifying and Validating the Critical Path and Margin Allocation Using Probabilistic Analysis.” Joint Space Cost
Council (JSCC) Scheduler’s Forum. Page 3. March 2017.
64 GAO-16-89G. GAO Schedule Assessment Guide. Page 75. December 2015. http://www.gao.gov/assets/680/674404.pdf
65 NPR 7120.5E, page 56, defines Margin as, “The allowances carried in budget, projected schedules, and technical performance
parameters (e.g., weight, power, or memory) to account for uncertainties and risks. Margins are allocated in the formulation
process, based on assessments of risks, and are typically consumed as the program/project proceeds through the life cycle.”
activity; however, these activities do not have any defined scope, nor do they have any
associated budget.
Note: Some organizations may use terms such as “reserves” or “integrated returns” to describe schedule margin; however, the NASA-preferred term is “margin.” Reserve is often used to refer to forms of funding (e.g., Management Reserve)66; however, NPR 7120.5 specifically states that “reserve” is an obsolete term and makes references instead to “schedule margin” for schedule and “UFE” for funding.
• Contingency. Within the context of Schedule Management, Contingency refers to non-working
days or times in the schedule (such as holidays, weekends, or extra shifts) that could be used to
overcome performance delays.
Note: Contingency is not to be confused with margin (i.e., working-days) that are intended for
use to overcome uncertainties and risks.67
• Float or Slack. In general terms, float is the number of workdays that an activity can be delayed
without impacting the start of a later activity. Float is an automatic calculation performed by
the scheduling tool using CPM scheduling. It is calculated by subtracting early dates from late
dates (i.e., Float = Late Dates - Early Dates). A CPM schedule critical path is typically
characterized as the path with the least amount of total float. Calculating the float in the
schedule is particularly important for the space community because spaceflight missions are
often constrained by launch dates, which limits the amount of available time in the schedule
and makes the flexibility to revise various workflows more important with respect to managing
risks. Float informs management as to which activities can be reassigned resources in order to
mitigate slips in other activities. Because float is a calculated value, it can be either positive or
negative, but the intent is to plan the P/p work such that the schedule has either positive or zero
float on the critical path. In general, negative float arises when an activity’s completion date, or
associated milestone, is constrained—that is, when the constraint date is earlier than an
activity’s calculated late finish. In essence, the constraint states that an activity must finish
before the date the activity is able to finish as calculated by network logic. Date constraints
causing negative float need to be justified or removed. Zero float is an indication that an activity
delay of a given number of days will result in a P/p delay of the same number of days. There are
two types of float: Free Float and Total Float.
• Free Float (Free Slack). Free float refers to the amount of time a task can be delayed before
impacting the early start date of its immediate successor(s). Zero Free Float is typically used to
model “just-in-time” deliveries by making early dates equal to late dates (i.e., scheduling
activities according to the late date), forcing the schedule to become equal to the “late
schedule”. Zero Free Float can make an activity critical if the Free Float and Total Float are equal (i.e., applying Zero Free Float constraints consumes all activities’ free float as well as all activities’ total float, making them all critical).
66 Per NPR 7120.5E and the NASA Space Flight Program and Project Management Handbook, SP-2014-3705, “reserves” is an obsolete term that has been replaced by “Unallocated Future Expenses (UFE)”. However, a more general use of “reserves” tends to appear in terms of “cost reserves” held by the CAMs or PM on a given P/p and does not necessarily refer to Mission Directorate-held UFE. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000400.pdf
67 Whereas NASA differentiates between “margin” and “contingency”, GAO uses the terms interchangeably with respect to schedule margin.
• Total Float (Total Slack). Total Float is the amount of time an activity can be delayed before
impacting the overall P/p completion (i.e., P/p critical path, finish date or end-item date). Total
Float directly relates each task to the P/p finish or end-item date. Positive Total Float is an
indication that an activity can be delayed without affecting the P/p completion. Negative Total
Float is an indication that an activity will impact the P/p completion unless the time is
recovered. Zero Total Float is an indication that an activity delay of a given number of days will
result in a P/p delay of the same number of days.
It is important that the appropriate terms be used consistently across all P/p elements, such that
assessment of the P/p’s flexibility and ability to overcome or mitigate uncertainties and risks is easily
identified and well understood. Whereas float is directly related to the logical sequencing of activities, margin is established through an understanding and analysis of P/p uncertainties and risks.
Section 7.3.2.1.4 discusses the processes for the management of schedule margin.
From (Point in Life Cycle)     To (Point in Life Cycle)            Amount of Planned Margin
Confirmation Review            Beginning of Integration & Test     Varies: 1-2 months of schedule margin per year
Start of Integration & Test    Shipment to Launch Site             Varies: 2-2.5 months of schedule margin per year
Delivery to Launch Site        Launch                              Varies: 1 day per week, 1 week per month, 1 month per year
Figure 5-30. Established standards for margin allocation.
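As an arithmetic sketch (not from the handbook) of applying the first two rows of Figure 5-30, with an assumed 18-month phase span; the day/week/month rules in the last row are omitted for simplicity:

MARGIN_MONTHS_PER_YEAR = {
    "confirmation_to_start_of_IT": (1.0, 2.0),  # 1-2 months of margin per year
    "start_of_IT_to_ship": (2.0, 2.5),          # 2-2.5 months of margin per year
}

def planned_margin_months(phase, span_months):
    """Return the (low, high) planned margin for a phase of the given length."""
    low_rate, high_rate = MARGIN_MONTHS_PER_YEAR[phase]
    years = span_months / 12.0
    return low_rate * years, high_rate * years

# An assumed 18-month span from Confirmation Review to the start of I&T
# implies 1.5 to 3.0 months of planned schedule margin.
print(planned_margin_months("confirmation_to_start_of_IT", 18.0))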
These margin guidelines are suitable for early P/p planning; however, margin task durations can also be
established in a way that corresponds to the likely impact of the P/p’s associated uncertainty and risk
based on the results of a probabilistic analysis. As early in the P/p life cycle as possible, margin
durations should be demonstrated to be adequate using some form of risk analysis (see Best Practice in
Section 6.3.2.5.3.6). Per NPR 7120.5, “margins are allocated in the Formulation process, based on
assessments of risks, and are typically consumed as the P/p proceeds through the life cycle.”68 Figure
5-31 shows the NPR 7120.5 requirements for risk-informed schedules. Furthermore, the NASA Cost
Estimating Handbook characterizes a risk-informed schedule as having discrete risks and uncertainties
accounted for within the schedule.69
Figure 5-31. The requirements for risk assessments and risk analysis by project phase are listed in this figure.
Early in the P/p life cycle, it is a recommended practice to perform a probabilistic schedule risk analysis
to inform the adequacy and placement of schedule margin in the IMS. The amount of margin
incorporated in the schedule should be enough to accommodate all identified duration uncertainties
and discrete risks. Developing informed duration uncertainties requires that the P/p have a clear
understanding of whether its task duration estimates are accurate, inflated, or overly optimistic. It is
also important for the P/p to be aware that unexpected events may occur, which may cause the
schedule to slip (e.g., extreme inclement weather, government shutdown, etc.). Discrete risks are
identified through the P/p’s Risk Management process, the likelihood and impacts of which should be
68 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2020. Page 56.
69 NASA Cost Estimating Handbook, V4.0. Appendix J. Page J-7. February 2015.
https://www.nasa.gov/sites/default/files/files/CEH_Appj.pdf
discussed among the PM, Technical Lead, Risk Manager, Business Manager, and Scheduler. Considering
the results of an SRA that incorporates both uncertainties and risks allows for increased accuracy in
downstream forecasts. In addition, NPR 7120.5 requires a risk-informed schedule at the project level at
KDP A, a risk-informed schedule at the system level at SRR, and a risk-informed schedule at the
subsystem level at SDR, which are typically achieved through an SRA. Details on performing an SRA,
including determining the uncertainty and discrete risk parameters to include in the SRA Model, are
contained in Section 6.3.2.5.3.
An ICSRA/JCL is required at KDP I/KDP C just prior to setting the P/p baseline, as well as at rebaselines
for tightly coupled programs, single-project programs, or projects with an estimated LCC greater than
$250 million. For single-project programs and projects with an LCC of $1 billion or more, an ICSRA/JCL is
required at KDP B, CDR, and KDP D.70 During Implementation, it is important to take P/p float into
consideration when establishing any new margin activities due to a better understanding of
uncertainties or the identification of new risks. For instance, it is likely that the P/p has little flexibility in
changing its finish date or end-item delivery date. It is also possible that the P/p has already identified
margin in an amount equal to the available total float during the initial Schedule Development process.
If not, the P/p may opt to take the remaining amount of days between the calculated early finish date
and the P/p’s finish date (which should be represented by total float) and add a margin activity that
applies an equivalent duration in the schedule to absorb any remaining float. This will make the primary
critical path a zero-float path and maximize the amount of margin the P/p has available to manage
identified risks and uncertainties. If more margin is allocated than total float in the schedule, the float
on critical path activities will become negative, indicating that the P/p requires time beyond its
scheduled completion (see also Section 1.1.1.3, At the End of the Schedule Logic Flow).
This handbook places emphasis on identifying and managing schedule margin over float. With margin, it
is possible to determine whether a P/p has “adequate” margin due to its direct relationship to
uncertainties and risks. Managing margin allows for direct traceability to potential uncertainty and/or
risk impacts. Because the allocation of margin requires available float in the deterministic schedule, by
managing schedule margin, the float is often managed by default. Guidance on margin and float erosion
tracking can be found in Section 7.3.3.1.6.
70 Memo from the NASA Associate Administrator: “Joint Cost and Schedule (JCL) Requirements Updates.” May 24, 2019.
completion. A driving path is based on zero free float, identifying the drivers to a given activity rather than the effect on total float through P/p completion. Margin activities can be captured on these paths as
well for better management insight and schedule control; however, margin calculations against the
overall P/p finish date or end-item delivery date should be derived from the P/p’s primary critical path
for reporting purposes.
In general, for Space Flight P/ps, margin can be assigned to systems and subsystems during design and
fabrication and placed just before delivery into I&T in order to protect the start of I&T. After I&T start,
additional margin tasks are assigned throughout the I&T flow and are usually placed just before major
test activities to protect time slots in the test facilities. At the end of I&T, another margin task is usually
assigned to protect the ship date to the launch integration activity. Of course, there are variations on
this approach depending on the actual P/p Planning (e.g., performing an SRA to establish risk-informed
margin locations as described in Section 6.3.2.5.3.6).
Figure 5-32 is an example of margin allocation for a project early in Formulation using established
standards. The margin from the subsystems into the Integration and Test (I&T) as shown here is a single
margin allocation.
“risky” areas of the schedule. It is important to note that schedule margin durations should not be used
to hold a deliverable forecast to a static date but should be based upon risks and uncertainties.
It is possible that budget needed to support the new scope will come from either P/p-held Management Reserve (MR) or NASA HQ-held Unallocated Future Expenses (UFE).72 Because UFE is
generally risk-informed as part of a risk analysis exercise to set the Agency Baseline Commitment
(ABC)73, it is likely that UFE will exist in an amount to cover the projected risks that may affect the
schedule.74
It is a recommended practice that no specified budget be assigned to a schedule margin activity as part
of the baseline. Instead, budget available for “planned” schedule margin activities should be maintained
outside of the baseline. As part of the SRA activity, this may be handled by creating a discrete risk in the
risk list, mapping it to a margin activity assigned at the appropriate point in the schedule, and then putting a lien against the MR or UFE until the margin is “used” – that is, converted to an activity with defined scope. Once management confirms that margin needs to be used for risks or uncertainties, activities with both scope and budget should be added to the baseline through the appropriate change control process. The original margin activity durations should be reduced accordingly.
71 “Concurrently Verifying and Validating the Critical Path and Margin Allocation Using Probabilistic Analysis.” Joint Space Cost Council (JSCC) Scheduler’s Forum. Page 16. March 2017.
72 Per the NASA EVM Implementation Handbook (NASA/SP-2018-599) and ANSI/EIA-748, the Agency Baseline Commitment is the integrated set of requirements, cost, schedule, technical content, and an agreed-to JCL that forms the basis for NASA’s commitment to the external entities of OMB and Congress. “Only one official baseline exists for a NASA program or project, and it is the Agency Baseline Commitment.”
74 “The level of UFE or UFE percentage should be selected based upon achieving a particular level of confidence from the cost or joint cost and schedule risk analysis. The appropriate level of confidence is chosen by the appropriate NASA management council after the analysis, and the resulting UFE should be identified as the recommended level at all Confirmation Reviews.” NASA Cost Estimating Handbook, V4.0. Page 28. February 2015. https://www.nasa.gov/sites/default/files/files/01_CEH_Main_Body_02_27_15.pdf
Caution. Some organizations may choose to allocate budget to margin activities (i.e., “funded schedule
margin”). This is not a best practice as it effectively inflates the P/p baseline and complicates EVM
calculations by making metrics appear as though less work has been accomplished overall per the
inflated baseline.
It is necessary to understand margin in the context of the IMS’s networked elements that affect it, so a
Schedule Risk Analysis-based Assessment should be performed after margin is allocated to more
completely understand the findings associated with all lower-tier assessments.
The following sections further describe resource loading and cost loading. These strategies should not
be viewed as conflicting approaches, but rather as two different methods of applying “resources” to the
IMS to satisfy different P/p management needs and purposes. While it is important to introduce and
distinguish the differences between resource loading and cost loading techniques, specific guidance and
information on this second approach is addressed in greater detail in Sections 6.3.2.2.1.3 and 6.3.2.2.3,
as well as within other Agency JCL instructional documentation. It should be noted that although
“resource loading” is an Agency requirement for those P/ps that have a JCL requirement, cost loading is an acceptable method.
Note: An additional section on budget loading follows the resource loading and cost loading sections;
however, budget loading is not a formally defined method within the Agency.
Resource loading provides many additional benefits that greatly enhance the P/p planning and control process, including, but not limited to:
• Ensuring accurate integration of work and budget plans (e.g., to support EVM), thereby
increasing confidence and reducing risk
• Ensuring resource availability to perform the work (e.g., labor, procurement, etc.)
• Providing greater insight into workforce adequacy and allocations
• Generating accurate inputs for the Agency Planning, Programming, Budgeting, and Execution (PPBE) process
• Providing cash flow and budget profiles
• Providing quicker, more effective analysis for “what-if” exercises
There are, however, cautions associated with resource loading that may indicate it should not be done, such as:
• Insufficient team resource loading skills for effective implementation and maintenance
• Scheduling tools inadequate to perform resource loading
• An undefined P/p resource pool
• Lack of clarity in contractor or subcontractor allocation of specific resources to specific portions
of the schedule (due to proprietary corporate information)
• A P/p team culture resistant to its use
Figure 5-33. An example of a resource-loaded schedule.
Resource loading can be done in an automated scheduling tool or in an external spreadsheet; however,
to ensure adequate cost and schedule integration, it is a recommended practice to implement resource
loading within an automated scheduling tool. Some standard practices for implementing resource
loading within an automated scheduling tool include, but are not limited to, the following:
• Prior to assigning resources to IMS tasks, it is a recommended practice that a listing of potential
resources be established within the automated schedule tool (see Figure 5-5). This resource
“library” or “pool” should contain all types of resources that will be needed for the P/p,
regardless of whether they are workforce, equipment, or consumables.
• It is a recommended practice that the resource pool uses a consistent resource naming
convention. This will enhance accuracy and consistency in the planning, integration, analysis,
and reporting of P/p resource data. Due to the size of P/ps and the need for flexibility to allow
multiple people within a single organization to work specific tasks, P/ps may opt to use job
titles/levels (e.g., Engineer, Sr. Engineer, etc.) versus individual names of personnel in the
resource pool.
Once the resource pool is complete, task resource assignments can be made.
Within the resource pool contained in the automated scheduling tool, there are also specific data elements, as shown in Figure 5-35, that must be associated with each resource and are critical to accomplishing effective resource loading. It is a recommended practice that these resource data
elements include, but are not limited to, the following (* indicates recommended minimum; a minimal record sketch follows this list):
• *Resource name (specific employee names are not recommended due to dynamic work
assignment changes); add new resource names as-needed
• *Resource description (e.g., organization name, support contractor company name)
➢ Resource Types (e.g., workforce, material, or consumables)
• *Element of cost:
➢ *Travel (designator = Travel)
➢ *Personnel Cost (designator = CS)
➢ *Other Direct Cost (designator = ODC)
➢ *Support Contractor (designator = SUP)
➢ Equipment (designator = EQP)
➢ Contracts (designator = CON)
➢ Material (designator = MAT)
➢ Overhead and G&A (designator = OGA)
• *Center identifier (use official Center acronym)
• *Maximum number of units available
• *Standard Unit Rate (P/p to determine)
• *Overtime Rate (P/p to determine)
• *Cost Per Use (P/p to determine)
• *Accrual method (start, prorated, end)
• *Resource Calendar (reflects active periods of resource availability - P/p to determine)
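A minimal sketch (not any scheduling tool’s schema) of a resource-pool record carrying the recommended-minimum elements above; the field names and example values are assumptions:

from dataclasses import dataclass

@dataclass
class Resource:
    name: str               # generic title, not an individual, per the guidance
    description: str        # organization or support contractor company name
    element_of_cost: str    # e.g., "CS", "Travel", "ODC", "SUP"
    center: str             # official Center acronym
    max_units: float        # maximum number of units available
    standard_rate: float    # standard unit rate, P/p to determine
    overtime_rate: float    # P/p to determine
    cost_per_use: float     # P/p to determine
    accrual: str            # "start", "prorated", or "end"
    calendar: str           # named resource calendar of availability

engineer = Resource(
    name="Sr. Engineer", description="Flight Systems Branch",
    element_of_cost="CS", center="GSFC", max_units=3.0,
    standard_rate=115.0, overtime_rate=0.0, cost_per_use=0.0,
    accrual="prorated", calendar="GSFC 5x8 Standard",
)
print(engineer.name, engineer.element_of_cost, engineer.max_units)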
Allocation of resources may be done in various units of measure, depending on the type of resources
used. The P/S must also ensure that resources are distributed adequately across the specified task
durations. Most automated scheduling tools distribute assigned resources in a linear fashion, evenly
across the duration of a task, unless the user takes action to customize this distribution. To ensure that
a reasonable and achievable schedule plan has been developed, it is important that the P/S diligently
work through the resource loading process and establish a complete and credible basis from which to
move forward during P/p implementation. Figure 5-36 illustrates the schedule resource loading process.
Caution. It is important to note that automated resource leveling does not factor in the varying skillsets
and availability of specific resources (people).
Examples that illustrate how a schedule is resource leveled are provided in Figure 5-37 and Figure 5-38. See also Section 5.5.13 – Time-Phase the Schedule.
Figure 5-37. An example of a resource-loaded IMS with resource conflicts.
Figure 5-38. An example of a resource-loaded IMS with leveling to resolve resource conflicts.
It is very important that the resulting schedule data be reviewed carefully by the P/p management team, and not just taken at face value. This review ensures credibility for P/p implementation and alleviates other concerns relative to the schedule data, not necessarily related to float (slack), logic, or constraints, that may require adjustments before baselining the schedule.
It is a recommended practice that resource leveling be performed to accurately validate whether the
scheduled P/p completion is achievable with the allotted resources (e.g., facilities, people, and budget)
based on their availability. To baseline a P/p schedule without first resource loading and conducting
leveling analysis is to assume a significant risk in achieving P/p completion within budget and on
schedule. The P/p should also take this into consideration in terms of activity duration uncertainty when
performing an SRA to analyze the achievability of the schedule.
5.5.12.2 Cost Loading
Cost loading is a cost and schedule integration approach defined in Appendix J of the NASA Cost
Estimation Handbook (J.1.6.3).75 This approach utilizes dollars as the only resource and involves the
loading of projected costs (i.e., cost estimates, not to be confused with budget) to associated tasks
within the IMS (or an Analysis Schedule, if appropriate for performing an SRA/ICSRA). Cost loading is
often viewed as less cumbersome than traditional resource loading. Recall that resource loading has
several “cautions” identified for when it should not be implemented. P/ps may also find it difficult to properly set up and implement resource loading due to the complexity of the P/p or a lack of insight into contractor schedules that may make up a large portion of the P/p schedule. When resource loading is
deemed impractical, it is a recommended practice to implement cost loading within an automated
scheduling tool.
Per the NASA Cost Estimating Handbook, “the IMS needs to correspond to cost estimates to ensure that
enough resources can be applied to activities to complete them within the expected duration. This
should be done before the P/p schedule is baselined so that the relation between accurate cost and
schedule estimates can be verified.” Thus, cost loading is the time-phased estimate of the cost
generated by the P/p, typically through a grassroots or parametric estimate, where all the ground rules
and assumptions have been captured in a well-documented BoE.
Specifically, cost loading is accomplished by mapping cost estimates to schedule activities. The costs
should be loaded for each task according to how the cost interacts with the schedule activity. To do this,
costs are distinguished by whether they are time dependent (TD) or time independent (TI). TD costs are
a function of activity duration multiplied by the periodic value (burn-rate). If the schedule is longer than
planned, additional costs will be incurred; if the schedule is shorter than planned then less costs will be
incurred. Examples of TD costs include labor (i.e., LOE activities, “marching army”, or full-time
equivalent (FTE)/work year equivalent (WYE)) costs, typically found in program management, systems
engineering, or safety and mission assurance activities, as well as rent, utilities, facility maintenance,
sustaining operations, or any other costs that are charged by the amount of time they are employed. TI
costs are defined as those that are fixed, irrespective of overall task duration. In other words, cost does
75 NASA Cost Estimating Handbook, Version 4.0. February 27, 2015. Appendix J. https://www.nasa.gov/sites/default/files/files/CEH_Appj.pdf
not grow because of an increase in schedule duration. Examples of TI costs include procurements of
components, materials, or even a set service, in addition to tests and other expenses. Figure 5-39 shows
an Excel Workbook of the project CBS broken out by TI and TD costs.
Figure 5-39. Example of a cost loaded schedule with CBS broken out by time-dependent and time-independent costs.
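As a minimal illustration of the TD/TI distinction, the following Python sketch (values and names are illustrative, not from the handbook) computes an activity's loaded cost as a duration-driven TD portion plus a fixed TI portion:

```python
# A minimal sketch of mapping costs to activities as described above:
# time-dependent (TD) cost scales with duration via a burn rate, while
# time-independent (TI) cost is fixed regardless of duration.
def activity_cost(duration_months: float, burn_rate_per_month: float,
                  ti_cost: float) -> float:
    td_cost = duration_months * burn_rate_per_month   # TD: grows with schedule
    return td_cost + ti_cost                          # TI: fixed (e.g., a procurement)

# e.g., a 10-month activity with a $50k/month "marching army" and a $200k buy
print(activity_cost(10, 50_000, 200_000))  # 700000
# If the same activity slips to 12 months, only the TD portion grows:
print(activity_cost(12, 50_000, 200_000))  # 800000
```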
Cost loading the IMS also provides a management tool that enables the P/p team to conduct an ICSRA
(or JCL) assessment. The JCL is a probabilistic assessment that is usually administered prior to key
designated P/p life cycle decision points to inform management regarding the likelihood of
programmatic success. Specifically, a JCL will assess the probability that cost will be equal to or less than
the targeted cost and schedule will be equal to or less than the targeted schedule date. Thus, it is
sufficient to use cost loading for JCL purposes. Cost loading a schedule for the purposes of performing
an ICSRA is further described in Section 6.3.2.4.
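As a minimal illustration of how a JCL is computed from simulation results, the following Python sketch (sample values are illustrative only, not from any real analysis) counts the fraction of Monte Carlo iterations in which both the cost and the finish date meet their targets:

```python
# A minimal sketch of the JCL computation described above: the fraction of
# Monte Carlo iterations in which BOTH cost and finish date meet their targets.
def jcl(costs, finishes, cost_target, finish_target):
    hits = sum(1 for c, f in zip(costs, finishes)
               if c <= cost_target and f <= finish_target)
    return hits / len(costs)

costs    = [95, 102, 110, 98, 120, 99, 101, 97, 130, 105]  # $M per iteration
finishes = [36, 40, 44, 38, 47, 37, 41, 36, 50, 42]         # months per iteration
print(jcl(costs, finishes, cost_target=105, finish_target=42))  # 0.7
```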
Normally, a budget is imposed upon the P/p (i.e., comes from the top down) and is set early on in the
P/p lifecycle when there is a lack of maturity in the requirements or understanding of the constraints.
As a result, the budget numbers are likely underestimated due to the inability to understand the
complete scope of work. In addition, the P/p usually plans to continue for a set duration according to a
set budget (i.e., ABC). The purpose of integrating the “cost” and schedule is to determine whether the
P/p can meet these commitments. Budget loading implies that budgeted dollars are assigned to the
schedule activities. Unfortunately, the budget lacks the fidelity or necessary breakdown to be able to
assign sub-budget elements to schedule activities. Consequently, the concept of budget loading a
schedule offers limited feasibility for establishing and analyzing a P/p plan.
Sometimes, the budget and cost estimate are consistent at the beginning of the project, but an
underperforming contractor on a cost-plus-fixed-fee (CPFF) contract causes the costs to quickly exceed
the budget. In these instances, spreading the budget across activities, thereby budget loading the
schedule, underestimates the potential actual costs. A budget may also have political implications and
carry embedded risk. In addition, phasing constraints and funding profiles can impact a P/p’s budget,
which can then become a reduction from the grassroots estimates and/or modeled cost estimate. Given
these factors, the budget may or may not be aligned with the P/p’s cost estimate. An ICSRA performed
on a budget-loaded schedule carries embedded risk, since the budget is generally assumed to be a
reduction of the grassroots or modeled cost estimate. Because the budget does not reflect the
expected costs of the project (e.g., historical resource costs/cost trends, risks and uncertainties that can
cause schedule delays resulting in increased costs, etc.), budget loading may underestimate projected
costs and ultimately not provide a credible confidence level. Thus, budget loading should be avoided.
Document the BoE for Resource or Cost Loading
It is important to document in the BoE any assumptions related to the resource or cost loading of the
schedule, which may include rationale for the level at which costs or resources are loaded. It is also
necessary to reference (and include where possible) any applicable source documentation (e.g., WBS
Dictionary, PPBE, independent cost estimate(s), resource rates, etc.) as part of the BoE.
Marrying the financial and schedule domains together is perhaps the most delicate schedule
development procedure. As such, its applicable assessment activity, the Resource Integration
Assessment procedure, proves invaluable in affirming that these domains are at least mechanically
aligned.
Figure 5-40. The relationship between P/p funding, the P/p budget plan, and the P/p schedule.
There are many situations related to the P/p budget and funding that may dictate when and how quickly
activities can be performed, which directly contribute to how the schedule is time-phased. Still other
situations may cause the P/p to intentionally re-phase the schedule such that those conditions can be
met. Typical situations are as follows:
• Budget Availability. Often within NASA, the annual budgeting process and the most efficient
P/p schedule do not match. The annual budgeting process tends to be a flat line, whereas the
most efficient P/p schedules will follow a ramp-up, peaking before CDR, then ramp-down to a
lower level for I&T and launch vehicle integration. Smoothing the peak funding can be
accomplished by using the SNET constraint or lags to move activities into a subsequent fiscal
year. When perfect leveling cannot be achieved, other techniques may also be utilized, such as planning for budget carryover from underutilized years to over-utilized years (see the sketch following this list). For time periods
that underutilize available budget, schedule compression techniques are an option (See Section
7.3.4.5).
• Continuing Resolution. Continuing Resolution (CR) occurs when Congress is unable to pass the
budget in time for the fiscal year start. The CR process will hold current spend levels flat until
the new FY budget can be passed. CRs have become common recently and can last for several
months or more. It may be worthwhile to consider planning the schedule such that the required
budget will remain flat from October through December.
• Facility Availability. Major facilities such as wind tunnels, vibro-acoustic chambers, and vacuum
chambers are in high demand and may not be available at the time the P/p needs them. In such
cases, using a SNET may be a valid approach.
• Human Resources. Staffing and specialty skills may be limited or unavailable when the planned
schedule activities require them. If the schedule is resource loaded, scheduling tools support
resource-leveling which will stretch or move activities to bring the needed level into compliance
with available resources. If the schedule is not resource-loaded, the SNET constraint and manually stretching durations can be used to match the activities to available personnel.
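As a minimal illustration of the carryover planning mentioned in the Budget Availability bullet above, the following hypothetical Python sketch compares a flat annual budget to a ramped, time-phased cost profile and tracks the cumulative carryover; if the balance never goes negative, the phasing is executable under the flat budget:

```python
# A hypothetical sketch of carryover planning: bank budget from
# underutilized years to cover over-utilized years.
def carryover_by_year(annual_budget: float, phased_costs: list[float]):
    balance, plan = 0.0, []
    for fy, cost in enumerate(phased_costs, start=1):
        balance += annual_budget - cost  # underutilized years bank budget
        plan.append((f"FY{fy}", cost, round(balance, 1)))
    return plan

# Flat $100M/yr budget vs. a ramp-up/ramp-down cost profile peaking near CDR
for fy, cost, bal in carryover_by_year(100.0, [60.0, 110.0, 130.0, 100.0]):
    print(fy, f"cost={cost}", f"cumulative carryover={bal}")
# FY1 cost=60.0  cumulative carryover=40.0
# FY2 cost=110.0 cumulative carryover=30.0
# FY3 cost=130.0 cumulative carryover=0.0   <- never negative, so executable
# FY4 cost=100.0 cumulative carryover=0.0
```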
Document the BoE for Time Phasing the Schedule to Align with the Availability of Funding
While resource or cost loading the schedule can help to ensure that the cost estimate is aligned with
scheduled activities, it is also important to make sure that the P/p budget, and the availability of that
funding, is adequate to support the time phasing of the activities. It is important to document any
assumptions associated with the available funding as part of the schedule BoE, including any special
circumstances related to how much budget is available and when. Further, like the Schedule Risk
Analysis-based Assessment before it in Section 5.5.10, the Integrated Cost and Schedule Analysis-based
Assessment procedure should be executed after this schedule development step to shed new light on
assessment findings and augment the BoE.
Often the P/p will have identified risk mitigation plans for the actionable risks. Risk Mitigation is an
action or series of actions put into place by P/p management to reduce the likelihood of risk occurrence
or the impact from the risk event. Detailed risk mitigation plans include information on the mitigation action(s), the timeframe for mitigation, the resources required, expected results, alternatives, and the cost of the mitigation effort. When such risk mitigation burn-down plans are available,
each mitigation activity should be mapped to the schedule at the appropriate location along with a
justification for the expected risk score and quantification after the mitigation activity is complete. The
level of post-mitigation risk is known as residual risk. Residual risk represents the likelihood of the risk
event and impact of the risk event after mitigation activities are enacted.
Note: An “accepted” risk’s schedule consequences should augment the baseline IMS; likewise, the
accepted risk’s cost consequences should be incorporated into P/p cost estimates and budget plans.
• Schedule BoE. The Schedule BoE is “the documentation of the ground rules, assumptions, and drivers used in developing the cost and schedule estimate, including applicable model inputs, rationale or justification for analogies, and details supporting cost and schedule estimates.”77 The Schedule BoE dossier acts as a comprehensive, structured collection of technical and programmatic information necessary to fully develop, understand, assess, analyze, and theoretically reproduce the IMS, while also playing a supplementary role in schedule assessment.
A properly prepared IMS provides a roadmap from which the P/p team can execute all authorized work
and determine where deviations from the baseline plan have created a need for informal changes or
formal corrective actions. Prior to establishing the baseline, the schedule is referred to as the
preliminary schedule or preliminary IMS; once baselined, it is the “schedule baseline” or “baseline IMS”.
The IMS is baselined, usually through an Integrated Baseline Review (IBR), and progress is measured
from this baseline throughout the P/p life cycle. The baseline IMS provides the approved, time-phased
P/p schedule plan of the work to be performed that serves as the basis for performance measurement
during P/p implementation. As performance variances exceed prescribed thresholds, or new content is
added, corrective actions may be taken, or the baseline may be changed through the P/p’s CM/DM
process. The processes for establishing and controlling the schedule baseline are discussed in Chapter 7.
It is a best practice for the schedule to reflect vertical traceability in that any and all supporting
schedules contain consistent information and can be traced to the IMS. The IMS is traceable to the
WBS, the SOW, Contractor Performance Report (CPR), and the EVMS. P/p risks and risk mitigations
should be traceable to the IMS, as applicable. Vertical traceability allows for total schedule integrity and
enables different teams to work to the same schedule expectations. Schedules are typically categorized
according to three levels of detail:
• Summary Level. A Summary Schedule is a one-page report that represents a high-level roll-up of
the IMS and may be generated for a Program, for individual projects within a Program, or for
sub-projects of a project. It contains key summary activities and milestones depicted in a Gantt
chart, typically at the second-level WBS (e.g., subsystem level); although, the level of the roll-up
depends on the level of detail that will offer the PM or other stakeholders the appropriate level
of insight. The Summary Schedule should clearly identify the critical path at the summary level
and also show any areas of the schedule that contain margin. A Summary Schedule is often
referred to as a “Master Schedule” when it reflects a summary IMS for the complete P/p;
however, a Summary Schedule is not an IMS, and should not be used as a substitution for an IMS
when an IMS is required.
• Intermediate Level. The intermediate schedules are at a lower resolution of work to be
performed than what is depicted in the Summary Schedule, but at a higher level than the
Detailed Schedule(s). Early in the P/p life cycle, intermediate schedules may represent early
stages of the rolling wave approach (i.e., top-down). Later in the life cycle, they may be slightly
summarized versions of more detailed schedules (i.e., bottom-up). Intermediate schedules are
logic network schedules (i.e., CPM schedules) and reflect relationships among key events, start,
finish, and baseline dates for activities, as well as total float for each activity. Intermediate
schedules should be organized according to the WBS and support the key dates in the IMS.
Intermediate schedules should also clearly identify the critical path.
• Detailed Schedules. The detailed schedules are the lowest-level P/p element schedules
available that identify discrete work packages for a specific schedule element, such as a specific
WBS. Detailed schedules illustrate horizontal dependencies and are used to track and control
work progress at the lowest level. Detailed schedules are logic network schedules (e.g., CPM),
and should depict activity logic, start, finish, and baseline dates for detailed activities, as well as
the total float for each detailed activity. Detailed schedules should also reflect any other activity
attributes, such as leads/lags, uncertainty/risks, etc. It is critical that the activities in a detailed
schedule be defined at a low enough level to allow for finish-to-start interdependency
relationships where feasible, accurate progress measurement, issue identification, and
traceability to higher level milestones. In addition, detailed schedules should clearly identify the
critical path(s). If the detailed schedules are resource or cost loaded for the purposes of an
ICSRA/JCL, they may also require traceability to the CBS.
Estimates for all work activities establish a logical hierarchy from the detailed activity level to
intermediate to P/p summary levels, and contain baseline, actual, and forecast dates for each activity.
Thus, the IMS is a hierarchical, tiered network capable of rolling up to high-level summary
representations of activities, as well as breaking down to the lowest level of task details showing
dependencies, resources, durations, and constraints. Using the assigned coding structure, the
scheduling tool is able to filter and summarize schedule data to provide reports at each of these levels.
An activity owner should be able to trace detailed activities to higher-level summary activities within
intermediate- and summary-level schedules. In much the same way, the PM should be able to trace
summary activities down to their more detailed components or work packages. As shown in Figure 5-41,
sub-project schedules can be separately maintained by the owning organizations, and as required, be
provided to the P/p on a regular basis with appropriate status/performance data to serve as an update
to the P/p IMS. Even though the sub-project activities may be rolled into a higher-level summary task or
milestone, Technical Leads (e.g., CAMs, WBS Element Owners, etc.) should be able to identify when and
how their activities affect the overall P/p schedule. Any descriptive narrative associated with the
traceability of different levels of the schedule to one another should be captured as part of the BoE.
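As a minimal illustration of roll-up via an assigned coding structure, the following Python sketch (WBS codes and hours are illustrative only) summarizes detailed task data at a chosen WBS level, much as a scheduling tool filters and summarizes using its field codes:

```python
# A minimal sketch of using an assigned coding structure (here a WBS code
# field) to roll detailed activities up to a summary level.
from collections import defaultdict

tasks = [  # (WBS code, hours remaining) -- illustrative values
    ("6.2.04.1", 120), ("6.2.04.2", 80),   # GN&C detail
    ("6.2.05.1", 200),                      # Thermal detail
]

def roll_up(tasks, level: int):
    """Summarize hours at the WBS level given by the first `level` segments."""
    summary = defaultdict(int)
    for code, hours in tasks:
        key = ".".join(code.split(".")[:level])
        summary[key] += hours
    return dict(summary)

print(roll_up(tasks, level=3))  # {'6.2.04': 200, '6.2.05': 200}
```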
Figure 5-41. All levels of the P/p schedule should be vertically traceable to each other, reflecting consistent schedule data.
The level of insight and analysis that can be achieved from the schedule is heavily dependent on the
level of detail contained in the IMS, which is important to accomplishing the three continuous schedule
management processes: Schedule Assessment and Analysis, Schedule Maintenance and Control, and
Schedule Documentation and Communication. For instance, detailed critical path identification and
analysis, as well as detailed insight into P/p issues cannot be done with only summary-level schedule
content. Instead, the detail in the IMS is sufficient to identify the longest path of activities through the
entire P/p.78 For any reporting requirements that specify an “IMS”, the complete IMS in its native file
format should be delivered (e.g., to the SRB during P/p Life Cycle Reviews, or as part of the NASA
Corrective Action Plan Schedule Repository Initiative).
While each of the three levels of schedule detail likely exist for every element of the P/p (e.g.,
subsystem, instrument, sub-project), oftentimes, what goes into the P/p IMS is a mix of all three levels
of the supporting work elements, depending on the relationships of the individual Element Owners to
the P/p itself. For instance, a P/p IMS may have elements that are at a summary level, such as work
78 GAO-16-89G. GAO Schedule Assessment Guide. December 2015. Page 11. https://www.gao.gov/assets/680/674404.pdf
performed by international partners or parts procured from vendors, where only summary tasks and
delivery milestones are available. For contracted portions of the P/p IMS, the schedule may contain
work defined at an intermediate level. Elements of work that are performed in-house may be carried at
a detailed level to facilitate more rigorous Schedule Control.
A Program IMS may consist of one or more project schedules. Within NASA, Research and Technology
Programs (NPR 7120.8) are those which are strictly comprised of R&T projects. Space Flight Programs
(NPR 7120.5) are categorized according to the following four groups:
• Tightly Coupled Programs contain multiple projects that have a high degree of organizational,
programmatic, and technical commonality. This type of Program requires a much higher degree
of integration between the projects potentially resulting in numerous inter-project
interdependencies in the Program IMS.
• Loosely Coupled Programs contain projects that have organizational commonality, but little
programmatic or technical commonality. These projects will typically have minimal or no inter-
project interdependencies in the Program IMS.
• Uncoupled Programs contain projects that are implemented under a broad scientific theme
and/or a common implementation concept, but each project will be independent of other
projects in the Program. These projects will have minimal or no inter-project interdependencies
in the Program IMS.
The level of detail contained in the Program IMS will generally depend on two key factors:
• The level of management insight desired by the Program
• The magnitude of Program scope and the amount of project data to be maintained and analyzed
at the Program level
Note: Potential compatibility issues between Program- and project-level schedule management tools
should be worked out during Planning and should not be a constraint on a Program’s ability to perform
adequate Schedule Management.
Much in the same way a Program schedule may be composed of multiple project schedules, it is not
unusual for a Single-project Program or project IMS to be an integration of several sub-projects.
• Single-Project Programs have only one project that makes up the Program. For this type, the
Program IMS will most likely not have interdependencies to other projects. These Programs
tend to have long development and/or operational lifetimes, represent a large investment of
Agency resources, and have contributions from multiple organizations/agencies. These
Programs frequently combine Program and project management approaches, which they
document through tailoring.
• Projects often have interfaces with other projects, agencies, and/or international partners. In
other cases, a space flight project may have a prime contractor developing the primary science
instrument or spacecraft system. An R&T portfolio project may be made up of one or more
groups of R&T investigations that address the goals and objectives of the R&T portfolio
project.79 The sub-projects may be separately managed and maintained by other organizations
or by external contractors. Whether primarily “in-house” or not, the project IMS will most likely
have interdependencies to other organizations.
It is important to note that different organizations may use the term "integrated master schedule" as it
pertains to them. For example, a project IMS may represent a detailed schedule that gets summarized
at an intermediate or even summary level for inclusion in the Program IMS. Similarly, a sub-project
work-package level schedule may be the sub-project’s IMS; however, the same schedule would serve as
a detailed schedule for the “parent” project. When discussing the work of a prime contractor, the term
“IMS” may be used to refer solely to the prime contractor’s schedule. In actual practice, the
government IMS usually incorporates intermediate- or summary-level elements of the contractor's IMS,
whereas the contractor's IMS, at its lowest level, includes the individual activities necessary to complete
each work package. Should the government require more insight into the contractor schedule, it is
possible that a request, by way of contract language, would be made for intermediate schedules or
detailed schedules from the contractor for either specific elements of the contractor’s work or the entire
contractor “IMS”. In general, there are five techniques for characterizing the P/p IMS as shown in Figure
5-42.
Figure 5-42. A summary of the five different techniques for developing the IMS.
79 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012. Expiration Date: August 14, 2020. Page 11. https://nodis3.gsfc.nasa.gov/npg_img/N_PR_7120_005E_/N_PR_7120_005E_.pdf
The five IMS development techniques acknowledge the differences among NASA’s P/p types, acquisition
strategies, external partnering agreements, and other factors. Similarly, the IMS development
techniques have various advantages and disadvantages which are summarized in Figure 5-43.
Figure 5-43. A summary of the advantages and disadvantages of each IMS development technique.
While there are no mandatory rules for matching IMS development with P/p types, Figure 5-44 offers
some general guidance on which IMS technique is suitable for specific P/p types. The five techniques
are further described below.
Figure 5-44. Suggested mapping of each IMS development technique to P/p types.
Single Consolidated P/p IMS
With the Single Consolidated P/p IMS, all of the P/p work scope is incorporated into a single IMS
schedule file encompassing NASA in-house, contractor, and partner efforts. If interrelationships exist
between any of the provider schedules, then appropriate logic relationships should be included to
accurately model those interdependencies. Where these interdependencies exist, it is appropriate that
the P/p manage agreements (e.g., MOAs, MOUs, etc.) between the elements.
This technique does not necessarily prescribe that the native IMS files from the providing organizations
are directly integrated into the P/p IMS. For example, in some cases the P/S creates a summary or
intermediate level version of a provider’s schedule that is incorporated into the P/p IMS. However, it is
a recommended practice that the entire scope of work be broken into schedule tasks and milestones at
a consistent level of detail to allow discrete progress measurement and visibility into the overall design,
fabrication, integration, assembly, test, and delivery phases of each end item deliverable. Additionally,
all schedule tasks/milestones should be integrated with the appropriate sequence relationships to
provide a total end-to-end logic network leading to each end-item delivery.
The Single Consolidated P/p IMS should contain all contract and controlled milestones, key
subcontractor milestones, end item delivery dates, key data delivery dates, and key Government
Furnished Property (GFP) need dates. For Programs, all tasks and milestones reflecting effort to be
implemented specifically at the Program level must also be included, as well as all Program-level control
milestones that have been established. The IMS should also contain the appropriate field codes
necessary to provide sort, select, and summarization capabilities for, but not limited to, WBS element,
project phase, and level-of-effort tasks. In-house and contractor schedules supporting the overall IMS
should capture the necessary P/p information according to Agency or P/p required field codes.
Once the integration of all in-house and provider schedules into a Single Consolidated P/p IMS is
complete, the P/S should validate the IMS through Schedule Assessment checks and reviews by the
input stakeholders. Since the IMS serves as the basis for identification of critical paths and driving paths,
as well as work-off and performance trending, schedule risk assessments, and “what-if” analysis, this
strategy provides the overall capability for integrated insight and oversight of all project work.
Caution. Most scheduling software supports the integration of multiple or external schedules, assuming
the schedules are built in the same tool. If using different tools, the schedule data may need to be
exported into a data file and then imported into the consolidated IMS. For on-going schedule
integration, capturing activity and milestone information may require reconciling status dates between
in-house and contractor schedules, as well as a careful consideration of calendars and cost or resource
loading techniques applied to each schedule.
Master P/p IMS (with sub-projects)
The Master P/p IMS technique is similar to the single consolidated P/p IMS technique described above,
except that in this case, a Master IMS file is created that provides the schedule backbone for crosslinking
the interdependencies among the supporting provider organizations’ individual IMS sub-project
schedule files.
P/p Control Milestone Integration IMS
With the P/p Control Milestone Integration IMS technique, in-house, contractor, and other partner-
provided IMS files are retained and monitored in their native formats. Horizontal schedule integration is
maintained through the identification and tracking of significant receivable and deliverable milestones
between the individual schedules.
Note: One of the challenges with the P/p Control Milestone Integration IMS technique is that external
stakeholders may feel that a true end-to-end IMS does not exist for the P/p. Using milestone sets to
reflect the major events in accomplishing the complete P/p effort is seldom an effective practice for the
P/p’s insight/oversight purposes, as it is not conducive to sound schedule analysis or meaningful insight
into overall P/p performance. Underlying interdependencies are much more difficult to reflect
accurately when using this technique. This difficulty is due to the technique in which the P/S must
account for the effort being carried out in between the milestones. In order for the P/p IMS to keep the
proper time-phasing for the numerous provider milestones the P/S must either incorporate appropriate
schedule lag values between each milestone (not a best practice) or assign date constraints to each
milestone included in the schedule (also not a best practice). A way to address this concern is for the
P/S to periodically create either a Single Consolidated IMS file or Master P/p IMS file with sub-projects
and directly link the predecessor and successor relationships to validate the logic among P/p elements.
A recommendation is to perform this cross-check schedule integration in advance of major LCRs.
This technique also relies more heavily on the detailed schedule insight and analysis provided by the provider organizations, which is not usually adequate for achieving integrated Program insight.
Prime Contractor-based P/p IMS
With the Prime Contractor-based P/p IMS technique, the P/p uses the prime contractor’s IMS as the P/p IMS. This technique is used when the P/p’s work essentially consists of overseeing a single prime
contractor whose effort comprises the majority of the P/p work scope.
Caution. In the case of contracted efforts, the P/p needs to ensure that the contractor is required to
deliver a schedule that reflects the level of detail necessary for PM oversight. It is recommended that
Prime Contractor-based IMSs be composed of all supporting P/p schedule information at least at an
intermediate level of detail for the contracted effort. The NASA P/p should also consider maintaining an
“add-on” schedule reflecting effort that falls directly under the responsibility of the NASA P/p Office.
The latter portion will allow for adequate P/p Office oversight and control of the work for which they are
directly responsible.
Figure 5-45. A typical Summary Schedule “output” from the Schedule Database is shown here. It includes the primary, secondary and tertiary critical paths.
While a Summary Schedule is often used for high-level schedule reporting, a Summary Schedule should never be used as a substitute for the P/p IMS when an IMS is explicitly required; doing so directly contravenes the definition of an IMS.
• Placeholder Activity. Often placeholder activities or links need to be added to the Analysis
Schedule network (or IMS if being used for the SRA/ICSRA) to cause it to behave as it should
under the influence of risk and uncertainty. For example, assume the natural design of the
schedule under normal circumstances will not allow two units to enter a test facility at the same
time. Under risk and uncertainty, it may be possible for two units to arrive at the same time,
thus necessitating an extra link in the network. Another example is the addition of
activities to resolve conflicts as risk and uncertainty move the activities around in the network,
such as a swap-out of a flight unit for a development unit should a flight unit be delayed due to
risk.
An Analysis Schedule should reflect the appropriate level of fidelity, such that it is directly traceable to
the IMS, replicates the critical paths, and emulates the IMS under the influence of risks and uncertainties.
Greater detail on how to develop an Analysis Schedule is discussed in Section 6.3.2.3.1.
• Activity/Milestone Variances and Schedule Variance (SV). Used to show status or track actual
performance against planned progress of activities/milestones and to feed EVM calculations
over time.
• Activity/Milestone Performance Trends. Used to communicate progress against baseline as
well as changes in the schedule progress month-to-month.
• Baseline Execution Index (BEI), Hit or Miss Index (HMI), and Current Execution Index (CEI).
Used to understand the difference between cumulative performance or performance within a
certain window and planned performance.
• Schedule Performance Index (SPI), Time-based SPI (SPIt), and Earned Schedule. Used to measure schedule performance against the plan through EVM-type calculations (a minimal sketch of BEI and SPI follows this list).
• Probability of On-time Delivery of Critical Items. Used to show where P/p management needs
to focus attention and exercise controls based on probabilistic SRA results.
• Risk-based Completion Trend. Used to show where P/p management needs to focus attention
and exercise controls based on probabilistic SRA results over time.
• Margin Status/Sufficiency of Margin. Used to compare margin availability for key activities to
the probabilistic SRA forecasted delivery/completion dates.
• Risk-Based Tracking against the MA and ABC. Used to track the results from periodic ICSRA
against the MA and ABC.
• Confidence Level Reports. Used to communicate the confidence levels associated with possible dates and/or costs resulting from an SRA/ICSRA (e.g., JCL).
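As a minimal illustration of two of the metrics above, the following Python sketch (values illustrative, not from any real P/p) computes BEI as the ratio of tasks actually completed to tasks baselined to finish by the status date, and SPI as earned value divided by planned value:

```python
# A minimal sketch (values illustrative) of two metrics named above:
# BEI = tasks completed / tasks baselined to finish by the status date;
# SPI = earned value / planned value.
def bei(completed: int, baseline_due: int) -> float:
    return completed / baseline_due

def spi(earned_value: float, planned_value: float) -> float:
    return earned_value / planned_value

print(bei(45, 50))        # 0.9 -> executing slightly behind the baseline plan
print(spi(4.5e6, 5.0e6))  # 0.9 -> 90 cents of schedule earned per planned dollar
```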
5.8 Skills and Competencies Required for Schedule Development
The skills and competencies required for Schedule Development can be found on the SCoPe website.80
Schedule Assessment is the sub-function for determining the validity and integrity of the schedule, and
Schedule Analysis is the sub-function for evaluating the magnitude, impact, and significance of P/p
uncertainties and risks. The sub-functions are complementary: Schedule Analysis cannot be performed
without first assuring that the schedule has been validated through Schedule Assessment. Schedule
Assessment tests the credibility of the schedule, whereas Schedule Analysis tests the likelihood of
accomplishing the P/p’s technical goals on-time, within budget, and with acceptable risk. The P/p
should integrate these sub-functions with other P/p management functions to ensure consistency
among programmatic products.
As Figure 6-1 shows, the Schedule Assessment sub-function is initiated when the Schedule Database is
first developed and whenever a change is made. Changes to the schedule may occur as part of the
Schedule Development, Maintenance, or Control sub-functions. The changes can be formal or informal
and internal or external. For example, changes orders can be generated by processes external to this
process (e.g., scope change), or internally generated to correct for findings from either of the two sub-
functions.
The execution frequency of these sub-functions is dependent on how often the risk posture changes
and how often changes are made to the schedule. In addition, many factors affect the level of
penetration required for Schedule Assessment and Analysis, such as:
• Technical risk levels
• Amount of confidence in the performing organization’s management abilities
• How well the PP&C processes are defined and followed
• Public visibility of P/p and impact of failure
• Design complexity, manufacturing complexity, and the ability to be produced
• Value of asset
• Past cost and schedule performance
Figure 6-2 illustrates the relationship of these factors with the level of Assessment and Analysis
required.
Figure 6-2. Frequency of reviews is a function of several management and technical issues.
The goal for applying the appropriate schedule insight penetration strategy is to enhance the probability
of mission success for NASA P/ps. Mission success must be achieved within the workforce, budget, and
time limitations that are levied through all phases of development and operation. Analysis begins with
an assessment of the complexity, maturity, and risks of the P/p being evaluated. Different levels of
insight are appropriate when taking these factors into consideration throughout the P/p life cycle.
By consulting NASA’s SCoPe and GAO guidance, the Agency has constructed classes of standards it uses
to parse and measure the independent dimensions of schedule reliability. These classes are designed to
minimize, to the greatest extent possible, mutual overlap and redundancy; this condition allows the
assessment process to unfold simply and modularly while preserving its precision and power. NASA has
therefore identified the following schedule reliability dimensions, each of which is defined by a characteristic question:
• Comprehensiveness → Is the entire scope captured?
• Construction → Is the schedule topologically healthy?
• Realism → Is the schedule’s data cogently justified?
• Affordability → Is the schedule executable given its financial context?
The goal of Schedule Assessment and Analysis is to satisfy each reliability dimension by fully addressing
each question. To accomplish this methodically, each dimension is further divided into sub-dimensions
that are more discretely mapped to actionable procedures. The dimensions are partitioned into sub-
dimensions as shown in Figure 6-3:
Figure 6-3. Dimensions of Schedule Reliability and their sub-dimensions.
To answer each dimension’s characteristic question, P/Ss must address the set of questions that satisfy
each sub-dimension. The comprehensive trace from each reliability dimension and sub-dimension,
through question sets, to the appropriate Schedule Assessment and Analysis procedures comprises the
Schedule Reliability Matrix. As shown in Figure 6-4 below, this tool condenses the complete lexicon of the schedule assessment domain into a clean map of the Schedule Assessment process.
• Comprehensiveness
  ➢ Content Breadth
    ~ Is the P/p’s current technical portfolio of content, driven by the latest set of requirements and plans, completely captured across the life cycle?
    ~ Are the controlled elements captured?
    ► Requirements Check (1st Tier)
  ➢ Level of Detail
    ~ Is the schedule adequately detailed to fully support reporting and analysis?
    ~ Is the schedule overly detailed to the detriment of usability?
    ► Critical Path and Structural Check (2nd Tier)
• Construction
  ➢ Critical Path Construction
    ~ Is the end-to-end schedule network constructed soundly?
    ~ Are all possible critical paths integrated correctly into the schedule?
    ► Health Check (1st Tier); Critical Path and Structural Check (2nd Tier)
  ➢ Vertical Traceability
    ~ Does the schedule map cleanly to the various levels of the P/p’s Work Breakdown Structure?
    ~ Are schedule summarization methods applied uniformly across the schedule?
    ► Requirements Check (1st Tier)
• Realism
  ➢ Justification of Discrete Schedule Elements
    ~ Is the analogical, performance, or expert-rendered basis for every discrete structural element justified cogently?
    ~ Is schedule progress measured against the baseline?
    ~ Are changes from previous versions of the schedule adequately explained?
    ► Basis Check (2nd Tier)
  ➢ Schedule Risk & Opportunity Treatment
    ~ Are all significant probabilistic schedule events identified?
    ~ Is their placement within the schedule’s structure well-understood?
    ~ Is there a justified basis for risks’ and opportunities’ parameters?
    ► Risk ID & Mapping Check (1st Tier)
  ➢ Schedule Margin Adequacy
    ~ Given schedule uncertainty, risks, and potential future schedule cases, are the magnitude and allocation of schedule margin sufficient?
    ♦ Schedule Risk Analysis (3rd Tier)
• Affordability
  ➢ Phased Budget Adequacy
    ~ Is the schedule executable given its financial context?
    ♦ ICSRA (3rd Tier)
Figure 6-4. The Schedule Reliability Matrix: reliability dimensions, sub-dimensions, assessment/analysis questions, and the tiered assessment procedures that address them.
Each of these question sets drives one or more Schedule Assessment and Analysis procedures, ordered
by logical precedence and defined by tiers. The 1st and 2nd Tiers provide the foundation for Schedule
Assessment, while the 3rd Tier guides Schedule Analysis as follows:
❖ 1st Tier
✓ Requirements Check. Assesses the schedule’s compliance with Agency and P/p
requirements.
✓ Health Check. Assesses the schedule’s overall integrity by gauging its alignment with various general best practice categories (a minimal sketch follows this list).
✓ Risk ID & Mapping Check. Assesses the existence and comprehensiveness of P/p schedule risks and their placement within the schedule’s structure.
❖ 2nd Tier
✓ Critical/Driving Path and Structural Check. Assesses the structural quality and fidelity of all possible critical paths and driving paths and compliance with horizontal traceability standards. Depends upon a satisfactory Health Check.
✓ Basis Check. Assesses the justification of each discrete schedule element, including risks.
Depends, in part, upon a satisfactory Risk ID & Mapping Check.
✓ Resource Integration Check. Affirms that P/p’s budget, workforce, and cost estimates
at any point in the P/p life cycle map to the corresponding IMS.
❖ 3rd Tier
✓ SRA. Using SRA or alternative risk-adjusted schedule analysis, measures the required
schedule margin implied by the schedule and assesses its adequacy.
✓ ICSRA. Using ICSRA or alternative integrated analyses, measures the required schedule
margin implied by the schedule and assesses its adequacy and the adequacy of phased
budgets to cover P/p cost estimates and discrete, risk-related costs.
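As a minimal illustration of the kinds of mechanical tests a Health Check implies, the following hypothetical Python sketch flags dangling logic (missing predecessors or successors) and hard date constraints; actual Health Checks cover many more best-practice categories:

```python
# A hypothetical sketch of two common schedule-mechanics tests: flag tasks
# with dangling logic and tasks carrying hard date constraints.
tasks = {  # illustrative task records, not from a real IMS
    "A": {"preds": [],    "succs": ["B"], "constraint": None},
    "B": {"preds": ["A"], "succs": [],    "constraint": "MUST_FINISH_ON"},
}

def health_check(tasks):
    findings = []
    for name, t in tasks.items():
        if not t["preds"]:
            findings.append(f"{name}: no predecessor (start of network?)")
        if not t["succs"]:
            findings.append(f"{name}: no successor (open end)")
        if t["constraint"]:
            findings.append(f"{name}: hard constraint {t['constraint']}")
    return findings

for finding in health_check(tasks):
    print(finding)
```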
• SM.AA.1 Schedule Assessment and Analysis Follows the SMP. The schedule is assessed and analyzed in accordance with the Schedule Management Plan.
• SM.AA.2 Requirements Checks are Routinely Performed. Requirements Checks are routinely performed to ensure that the IMS is compliant with Agency and P/p requirements.
• SM.AA.3 Schedule Health Checks are Routinely Performed and Errors Investigated. Schedule Health Checks are routinely performed to ensure that the schedule mechanics are not causing the schedule to calculate incorrectly, and any errors are investigated and corrected.
• SM.AA.4 Risk Identification and Mapping Checks are Routinely Performed. Risk Identification and Mapping Checks are routinely performed to ensure that the schedule risks are comprehensively identified and mapped appropriately to the IMS.
• SM.AA.5 Critical Path and Structural Checks are Routinely Performed. Critical Path and Structural Checks are routinely performed to affirm the integrity of the IMS’s overall network logic flow, including each potential critical path.
• SM.AA.6 Basis Checks are Routinely Performed. Basis Checks are routinely performed to affirm the quality of the estimate associated with each discrete schedule element.
• SM.AA.7 Resource Integration Checks are Routinely Performed. Resource Integration Checks are routinely performed to affirm that the P/p’s budget, workforce, and cost estimates map to the P/p IMS.
• SM.AA.8 SRA/ICSRA is Developed Using Appropriate Tools. The Schedule Risk Analysis or Integrated Cost and Schedule Risk Analysis (SRA/ICSRA) is developed using appropriate tools.
• SM.AA.9 Programmatic Data Products Used for the SRA/ICSRA Represent the P/p Plan. Programmatic products and data utilized to perform the SRA (or ICSRA) provide an adequate representation of the P/p plan and are consistent with each other with respect to their status dates.
• SM.AA.10 IMS is the Framework for the SRA. The Integrated Master Schedule (IMS) is used as the framework for the SRA, when feasible.
• SM.AA.11 Schedule Duration Uncertainties are Quantified in the SRA. Schedule duration uncertainties are quantified with respect to appropriate activities for inclusion in the SRA.
• SM.AA.13 Discrete Schedule Risks are Assessed and Quantified for the SRA. Discrete risks are assessed and quantified with respect to schedule impacts for inclusion in the SRA.
• SM.AA.14 Discrete Schedule Risks are Mapped to Appropriate Activities in the SRA. Discrete risks with schedule impacts are mapped to appropriate activities in the SRA.
• SM.AA.15 Discrete Risk SRA Inputs are Tested and Verified Prior to Simulation. The discrete risk inputs to the SRA Model are reviewed to ensure that they are captured and calculating correctly prior to running the simulation, and that they represent the intended model of the schedule risk.
• SM.AA.16 IMS is the Framework for the ICSRA. The IMS is used as the framework for the ICSRA, when feasible.
• SM.AA.17 Cost Model that Replicates the P/p Estimate is Defined and Formatted for the ICSRA. A cost model that replicates the P/p cost estimate is defined and formatted for the purposes of performing the ICSRA.
• SM.AA.18 Costs (or Resources) are Mapped to Appropriate Activities in the ICSRA. Costs (or resources) are mapped to the appropriate level of activities in the ICSRA.
• SM.AA.19 Cost Uncertainties are Quantified in the ICSRA. Cost uncertainties are quantified with respect to appropriate resources for inclusion in the ICSRA.
• SM.AA.21 Discrete Cost Risks are Assessed and Quantified for the ICSRA. Discrete risks are assessed and quantified with respect to cost impacts for inclusion in the ICSRA.
• SM.AA.22 Discrete Cost Risks are Mapped to Appropriate Activities in the ICSRA. Discrete risks with cost impacts are mapped to appropriate activities in the ICSRA.
• SM.AA.23 Cost ICSRA Inputs are Tested and Verified Prior to Simulation. The costs, cost uncertainties, and discrete cost inputs to the ICSRA Model are reviewed to ensure that they are captured and calculating correctly prior to running the simulation, and that they represent the intended model of the cost risk.
• SM.AA.24 SRA/ICSRA Inputs are Tested and Verified through Initial Simulation. The SRA/ICSRA inputs are reviewed through an initial simulation run to ensure that they are captured appropriately and calculating correctly through the outputs.
• SM.AA.27 SRA/ICSRA is Performed for Risk Sensitivity Analysis and Risk Prioritization. SRA/ICSRA is routinely performed for risk sensitivity analysis and risk prioritization throughout the P/p life cycle.
• SM.AA.30 ICSRA is Performed to Estimate P/p Joint Confidence Level. ICSRA is performed to produce a joint confidence level (JCL) for cost and schedule associated with achieving planned cost and schedule commitments (at least as often as required).
• SM.AA.31 SRA/ICSRA is Performed to Establish/Allocate Margin to Accommodate Uncertainty and Risks. SRA/ICSRA is performed to establish and allocate margin within the preliminary and/or baseline IMS, and routinely throughout the P/p life cycle to ensure sufficiency of margin to accommodate uncertainties and risks.
• SM.AA.32 ICSRA Demonstrates Schedule Consistency with Funding/Phasing Strategy. ICSRA is routinely performed to demonstrate that the schedule of activities is consistent with the funding/phasing strategy.
• SM.AA.33 ICSRA Demonstrates Sufficient Cost Reserves. ICSRA is routinely performed to demonstrate that cost reserves are sufficient to accommodate schedule delays caused by anticipated uncertainty and risk impacts.
Figure 6-5. Schedule Assessment and Analysis Best Practices.
While robust in satisfying the above points, Schedule Assessment does not guarantee that the IMS will
produce the desired products on dates required by sanctioned plans and stakeholders but ensures that
the IMS is sufficiently reliable to support reporting to stakeholders and the generation of analytical
insight. The sum of assessment activity evidence, uncovered progressively over time across iterations of
the assessment process, is collected in a Schedule BoE Dossier, herein referred to as the “BoE”, which
guides schedule evolution via specific improvements.
6.2.1 Prerequisites
Schedule Assessment can be initiated when:
• The SMP sub-plan, Schedule Assessment and Analysis Plan, which specifies the techniques used
to determine the compliance of the IMS to requirements and sets of best practices, is available
• The Schedule Database is complete
• An IMS has been generated from the Schedule Database as a new output or update since the
last assessment iteration
• The Milestone Registry is available
• The risk register is available
• Relevant schedule data from subcontracts and agreements are available
• The schedule BoE, created during Schedule Development and capturing the schedule’s basis
rationale, is available
6.2.2 Perform Schedule Assessment
After Agency practitioners studied all possible dimensions of schedule quality, NASA chose reliability as its umbrella measure of schedule ‘quality’. Reliable schedules are soundly reasoned
along all dimensions, best suiting them to enable correct and earnest analysis, control, and reporting.
Schedule Assessment should be pursued within the frame of continuous programmatic dialogue,
thereby fostering informational discovery, exchange, and capture within the BoE.
As defined earlier in Section 6.2, the execution of the 1st and 2nd Tier procedures in a prioritized-yet-
iterative fashion as the P/p matures defines the Schedule Assessment process, enabling collective and
documented understanding of the IMS through disciplined dialogue and investigation. Though the
constituent procedures are ordered by dependence-based precedence, each is not necessarily a discrete
activity, especially those that are seminal (see the 1st Tier assessment procedure set in Section 6.2.2.1). Schedule Assessment should develop progressively and iteratively over time, increasing
in scope gradually as foundational issues are uncovered and resolved. It should also adapt to the needs
of stakeholders and P/Ss throughout the P/p life cycle.
Schedule Assessment should not seem foreign or superfluous to P/p leadership and programmatic
personnel since it arises naturally as a product of P/p maturation and self-examination, driving BoE-
documented schedule justifications and introducing potential improvements. By design, it is flexible
enough to be performed by a variety of parties (e.g., P/Ss, Schedule Analysts, independent assessors,
and other stakeholders) early and throughout a P/p’s life cycle, beginning with preliminary schedule
development and iterations prior to baseline approval. During these initial stages, the P/p management
team and various support personnel must thoroughly review and evaluate the early iterations of the IMS
to ensure each one accurately reflects how the P/p effort will be formulated and implemented. It is also
important for P/p leadership to ensure that the schedule continues to reflect the implementation plan
as it is updated throughout the life cycle as a result, in part, of the continuously performed Schedule
Assessment process and its interdependent Schedule Management sub-functions, primary among them
Schedule Analysis, described in Section 6.3, and Schedule Maintenance and Control, as described in
Chapter 7.
Figure 6-6. Overview of the Schedule Assessment (and Analysis) process.
As depicted, Schedule Assessment generally proceeds in order across each procedure tier. Regardless of
who is spearheading the process (e.g., the P/S or an Independent Assessment’s programmatic assessor),
the results of each procedure iteration should augment the schedule’s preexisting BoE with a collection
of results and related investigational evidence accompanied by an integrated assessment narrative. A
schedule is nothing more than an estimate itself; a schedule BoE documents the schedule basis rationale
– and justification for best practice compliance and departures – in service of satisfying the reliability
dimensions, various Agency requirements, and ultimately, NASA’s Schedule Management body of
knowledge.
As a living product of the Schedule Development process, the BoE serves as a roadmap for schedule
evolution and primary agent of change. After each Schedule Assessment iteration, the BoE is evaluated
to determine the best course of action for the corresponding versions of the IMS. (Each BoE update
marks which incarnation of the schedule it addresses.) Very often, the majority of findings can be
addressed by virtue of the normal Schedule Maintenance cadence. Many situations warrant P/Ss and
Technical Leads, CAMs, or WBS Element Owners, and possibly P/p management, agreeing on
incremental changes that do not require formal intervention by other parties. However, if the BoE
illustrates serious non-compliances or defects that require more generalized awareness, leadership
involvement, and adjudication, or a major change such as a replan or a rebaseline is forthcoming, the
change control process should be engaged as outlined in Section 7.3.1.
The remainder of this chapter is devoted to the specific definition and discussion of each schedule
assessment procedure, associated questions, and assessment artifacts that are appropriate for inclusion
in the P/p’s BoE.
These 1st Tier procedures are particularly valuable during Schedule Development and its immediate aftermath, since they directly enhance the schedule BoE that was stood up as the IMS’s foundation during its formative moments. As a result of these activities, the schedule BoE, having been only
recently initialized, should mature into the form it will carry throughout the performance of each higher
tier and future Schedule Assessment iterations.
If for any reason during the early phases of Schedule Development, the BoE was not prepared to
accommodate the findings associated with the 1st Tier assessment procedures, the P/S must lead the
alignment between the BoE and each procedure’s requirements.
6.2.2.1.1.1 Step 1. Verify that the IMS Reflects the Breadth of Content Dictated by Requirements
Procedure 1 of the 1st Tier opens with a step that sets the analytical foundation for the assessment
process as a whole: checking the schedule against its most fundamental influence - the suite of
requirements applicable to the P/p. These include, but are not limited to:
• Authoritative P/p technical requirements documents
• P/p Plans, including domain-related plans
• SMP, including schedule guidance and GR&As
• Other P/p GR&A documents
• Milestone Registry
• Technical hierarchy documents, including the WBS, OBS, and CBS
• Applicable budget cycle documents
• Subcontracts, along with relevant stipulations
• Official external agreements, including international partnership agreements (e.g., MOUs,
MOAs, etc.)
• Stated, recorded stakeholder directives
This set of requirements and supplementary information shape the breadth of technical and
programmatic scope. The P/S should take special care to verify the full scope is reflected in the IMS,
which should contain the complete end-to-end networked flow of work necessary to reach the terminal
milestone, as described in Sections 5.5.3 and 5.5.6. GAO affirms this simply by stating:
The IMS should reflect all effort necessary to successfully complete the program,
regardless of who performs it.81
During this procedure, the P/S should check that the IMS incorporates the areas of content and features
according to the scope of work set forth by the P/p’s requirement set, as described in Figure 6-7 for Space
Flight P/ps and Figure 6-8 for Research and Technology P/ps.
81 GAO-16-89G. GAO Schedule Assessment Guide. December 2015. Page 11. http://www.gao.gov/assets/680/674404.pdf
Figure 6-7. Matrix for Assessing the Breadth of Schedule Content included in an IMS for Space Flight P/ps.82
[Figure 6-8 graphic: a content breadth matrix covering formulation (activities associated with acquisition, establishment of requirements, and preparation of P/p plans and control systems); implementation (execution of development plans, operations plans, and control systems); evaluation (self-reviews, independent assessment, and findings acceptance and incorporation into P/p plans); control and notification milestones (activities that precede and support PRSs and PARs for programs, PPRs and CAs for projects, and KDPs); major reviews and milestones; intra-P/p and external interdependencies; risk mitigation tasks and milestones appropriately located within the schedule structure and regularly updated with risk evolution; schedule control elements; and schedule margin in the form of BoE-justified tasks with durations, explicitly allocated or otherwise situated appropriately within the schedule structure. Time scope: the complete temporal span of the P/p beginning at ATP and ending at the terminal milestone, such as hardware on-dock or launch, with Pre-Phase A and Phase E/F content included as deemed available and applicable.]
Figure 6-8. Matrix for Assessing the Breadth of Schedule Content included in an IMS for Research and Technology P/ps.83
82 Missing risks and their mapping are handled by Procedure 3.
83 Missing risks and their mapping are handled by Procedure 3.
Within the schedule BoE dossier and within the IMS itself via custom fields (see Sections 5.3.5 and 5.5.2),
the P/S should examine (or create) a trace from the schedule content, in the form of tasks, milestones,
logical links, or other schedule elements, to the appropriate requirement or authoritative document.
This trace should be two-way: any requirements not manifested as content in the schedule should be
noted in the BoE dossier’s missing content registry for disposition.
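Conceptually, the two-way trace amounts to a pair of set-difference checks. The sketch below is a minimal illustration using hypothetical field names and records (not the schema of any particular scheduling tool): requirements with no manifested schedule content go to the missing content registry, and elements citing no requirement are flagged for disposition.

```python
# Minimal sketch of a two-way requirements-to-IMS trace check.
# Field names (uid, traced_req_ids) and records are hypothetical.

requirements = {"REQ-001", "REQ-002", "REQ-003"}  # from requirement documents

ims_elements = [
    {"uid": 101, "name": "Fabricate panel",       "traced_req_ids": {"REQ-001"}},
    {"uid": 102, "name": "Panel vibration test",  "traced_req_ids": {"REQ-002"}},
    {"uid": 103, "name": "Integration readiness", "traced_req_ids": set()},
]

cited = set().union(*(e["traced_req_ids"] for e in ims_elements))

# Requirements with no manifested schedule content -> missing content registry
missing_content = requirements - cited
# Schedule elements citing no requirement -> candidates for disposition
untraced_elements = [e["uid"] for e in ims_elements if not e["traced_req_ids"]]

print("Add to BoE missing content registry:", sorted(missing_content))
print("Elements lacking a requirement trace:", untraced_elements)
```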
It is also important that the schedule temporally span the life of the P/p, which includes the activities
near the termination of the overall effort. Often, the proper breadth of content is represented well in a
schedule’s near-term window but loses fidelity (or is lost entirely) for future planned tasks. This is
especially common in schedules that adhere to the rolling wave approach. The P/S should investigate
what is missing downstream in the IMS, as dictated by the requirement set, in addition to their
attention on the upstream elements. This step should be performed regardless of schedule detail,
though, in many cases, less detailed network structure associated with the later stages of a P/p also
lacks comprehensive breadth of content. The P/S should further augment the BoE dossier’s registry of
missing content with those elements lost due to the improperly rendered temporal scope.
Among the most crucial elements are those associated with schedule control: notification and control
milestones, as well as elements that fall within close proximity to them. As a formative effort, the P/S
should take special care to note, within the BoE, which of these items have not been incorporated
accurately into the IMS, since they represent the essence of a baselined and otherwise management-
dictated schedule. A schedule that informationally lags the true placement of controlled elements
often, at best, loses its usefulness as a reporting and analytical tool and, at worst, clouds the schedule
picture and misleads stakeholders.
As a best practice, the document that most often and relevantly bears upon P/p schedules’ vertical
traceability is the P/p’s WBS, as described in Sections 5.3.1 and 5.5.7, since it captures completely the
products being developed, produced, and reflected within the schedule, along with their nested
configurations. GAO also espouses this best practice and comments as follows:
At its summary level, the IMS gives a strategic view of activities and milestones
necessary to start and complete a program. At its most detailed, the schedule clearly
reflects the WBS and defines the activities necessary to produce and deliver each
product.84
The P/p’s WBS should closely map to the IMS if not match its structure outright due to the documents’
common calibration to work products. To that extent, the verification of proper vertical traceability
within the IMS should also, in part, verify that the WBS is sensibly structured and captures the full scope
of work. In lieu of a fully defined WBS, especially during early P/p stages far prior to the approval of a
schedule baseline, the P/S should be mindful of developing realities and use the full suite of available
programmatic documents to ascertain the extent of reasonable application of vertical traceability.
In many cases, the mapping of the schedule to the WBS will be covered by Step 1 of this procedure.
However, often, P/p schedules map to authoritative documents like the WBS but in inconsistent ways.
Regardless of whichever selection of programmatic plans or documents lends a tiered structure to the
IMS, P/Ss should note the uniformity of summarization methods’ application across the schedule and
should augment the BoE with a WBS-to-IMS map and a hierarchical comparison, if one is not already
being maintained.
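A WBS-to-IMS comparison can likewise be sketched as a simple coverage check. The example below assumes hypothetical, dot-delimited WBS codes and a flat task export; a real comparison would draw on the P/p's WBS dictionary and scheduling tool.

```python
# Minimal sketch of a WBS-to-IMS map comparison. WBS codes and task
# records are illustrative only.

wbs = {"1", "1.1", "1.2", "1.2.1", "1.2.2", "2", "2.1"}

ims_tasks = [
    {"uid": 1, "wbs": "1.1"},
    {"uid": 2, "wbs": "1.2.1"},
    {"uid": 3, "wbs": "1.2.9"},   # not in the WBS dictionary
]

used = {t["wbs"] for t in ims_tasks}

# Tasks coded against WBS elements that do not exist
invalid = [t["uid"] for t in ims_tasks if t["wbs"] not in wbs]

# Leaf WBS elements (no children in the dictionary) with no schedule content
def is_leaf(code):
    return not any(other.startswith(code + ".") for other in wbs if other != code)

uncovered = sorted(c for c in wbs if is_leaf(c) and c not in used)

print("Tasks with invalid WBS codes:", invalid)
print("Leaf WBS elements with no schedule content:", uncovered)
```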
Exit Criteria. The Requirements Check procedure is satisfied when the P/S has completed the following
tasks:
✓ Verify that the IMS reflects the breadth of content dictated by requirements.
✓ Verify that the IMS is vertically traceable.
✓ Mark the BoE with each data source’s date according to the current schedule and
assessment iterations. Alongside this, include the next steps of schedule evolution and a
corrective action recommendation when organic schedule improvement is not sufficient.
Procedure Maturation. Over time, the BoE should collate a list of missing content and errors in
schedule hierarchy. By iterating the assessment process during the beginning stages of the P/p’s life
cycle, the P/S should evolve the IMS to a point, at or shortly after requirements have stabilized, at which
no significant P/p elements are missing from the schedule. At or before this point (ideally well
before), a high degree of WBS-driven vertical traceability should be reflected within the IMS.
As part of this maturation, the P/S should create, update, or verify the requirement-to-IMS-element
map, citing P/p requirements and related authoritative documents. In most cases, this will at least
include a WBS-to-IMS map.
6.2.2.1.2 Procedure 2. Health Check
It is a best practice for routine Health Checks to be performed on the IMS to generally ensure that the
schedule mechanics are not causing the schedule to calculate incorrectly, and any errors are
investigated and corrected.
84 GAO-16-89G. GAO Schedule Assessment Guide. Page 11. December 2015. http://www.gao.gov/assets/680/674404.pdf
o Sub-dimension: Level of Detail
➢ Is the schedule adequately detailed to fully support reporting and analysis?
➢ Is the schedule overly detailed to the detriment of usability?
o Sub-dimension: Critical Path Construction
➢ Is the end-to-end schedule network constructed soundly and vertically traceable?
➢ Are all possible critical paths integrated correctly into the schedule?
Procedure 2, another in the 1st Tier, pivots from the Requirements Check’s pure content focus to an
examination of the IMS’s mechanics. The Requirements Check makes sure the proper ‘what’ is included
in the schedule while the Health Check tests ‘how’ that content is expressed. To do so, this procedure
assesses the IMS against a set of predefined, community-vetted metrics designed as indicators of
potential schedule construction issues. As such, the Health Check, properly executed, provides solid
ground for deeper assessments of the schedule’s task-logic flow, such as the 2nd Tier’s Structural and
Critical Paths Check.
6.2.2.1.2.1 Step 1. Execute an Automated Health Check
There are two general classes of schedule Health Check tools: MS Project add-ons85 like NASA’s
Schedule Test and Assessment Tool (STAT) and standalone scheduling analysis software packages, such
as Polaris and Deltek Acumen Fuse.86 These tools generally replicate or adhere closely to schedule
community-accepted standard metric sets and are often tailorable to fit organization-specific metrics.
Key indicators that conform to these industry standards are shown in Figure 6-9:87
85 The NASA Schedule Test and Assessment Tool can be requested through the NASA Software Catalogue managed by the
Marshall Technology Transfer Office, https://software.nasa.gov/software/MFS-33362-1
86 Additional schedule health check tools are referenced in the Agency Schedule Management Tool Matrix located at the SCoPe
website, https://community.max.gov/x/9rjRYg.
87 Descriptions of key indicators were compiled from the NASA Schedule Test and Assessment Tool (STAT); “SAA Schedule
Assessment and Analysis”, NASA, February 14, 2013; the Deltek Acumen Fuse tool; and the DCMA Manual 3101-02: Program
Support and Analysis Reporting, http://www.dcma.mil/Portals/31/Documents/Policy/DCMA-MAN-3101-02.pdf
Tasks

High Durations: This metric counts the number of activities that have a duration longer than two months. High duration activities are generally an indication that a plan is at too high a level for adequate planning and control. Activity durations should be realistic and measurable. NASA’s general use of rolling wave planning implies that near-term tasks are planned to a lower, discrete level of detail, whereas tasks scheduled to occur farther into the future may be planned at a more summary level of detail, when task details are as yet unknown. More discussion of schedule detail as it relates to rolling wave planning can be found in Section 5.5.7.3.

Incomplete Task Status: This metric counts the number of past due tasks and milestones with no revised forecast dates, i.e., the “status-as-of” date is too far in the past to be meaningful. For Schedule Assessment and Analysis purposes, it is helpful for all activities in the schedule to be statused with respect to the same date. (A note of caution: some scheduling software will allow incomplete tasks/milestones to remain in the past with no revised forecast dates. Omission of task status reduces schedule credibility, thereby hindering accurate float calculations, critical path identification and analyses, and task start/finish projections.)

High Float: This metric counts the number of activities with high float. Schedule paths with high amounts of float often arise due to artificially constrained activities. In conjunction with margin analysis, float analysis provides key information for P/p management decision making. It is typically suggested that activities on paths with high float be considered for acceleration, prioritizing the activities with the highest amount of float. Further discussion of float can be found in Section 5.5.11.

Missed Tasks: This metric counts the number of activities that have slipped from their baseline dates. This is an indicator of execution performance and includes the activities that have completed or will complete after their baseline dates.

Invalid Actual Dates: This metric counts the number of activities with actual dates in the future. Activities cannot be statused into the future; doing so could lead to erroneous dates in the schedule. Invalid actual dates reduce schedule credibility, thereby hindering accurate float calculations, critical path identification and analyses, and task start/finish projections.

Invalid Forecast Dates: This metric counts the number of activities planned in the future with status in the past. It is impossible to have future planned activities with status prior to the time now date. Statusing future planned activities could lead to erroneous dates in the schedule.

Activities Improperly Reflected as Milestones/Overuse of Milestones: This metric counts the number of activities that have a duration equal to zero days and/or are coded as milestones. If work/effort is involved, activities should have durations greater than zero days.

Tasks without a Baseline and Progress Status: All tasks should have a stated baseline duration against which performance can be measured. This metric is a straight sum of tasks missing such information.

Inconsistent Vertical Integration of Tasks: All similar tasks, grouped by WBS or some other guidance, should roll up to a summary level in a similar fashion. This metric measures how many tasks fail to do so and should be aligned in the schedule hierarchy with like activities.

Tasks with Missing Field Information: All tasks’ fields should be filled with current information. Those with missing field values should be noted and investigated.

LOE Tasks on the Critical Paths: The critical path cannot include LOE tasks because they do not capture measurable work. This metric is a sum of LOE tasks that should be removed from current or potential critical paths.

Logic and Lags

Missing Logic/Open Ends: This metric counts the number of “dangling” activities that are missing a predecessor, a successor, or both. All schedule activities should have at least one predecessor and one successor (although there are a few exceptions noted in Section 5.5.8.1, which include P/p start and finish, external deliveries, etc.). Failure to incorporate at least one predecessor and successor for each activity will impact the ability of the schedule to calculate properly, which may lead to improper float/critical path calculations and prevent credible SRA/ICSRA and “what-if” analyses.

Logical Relationships other than FS: This metric counts the number of activities with logic relationships other than FS. Logic other than FS adds complexity to the schedule, potentially inhibiting clear identification of the critical path. It is important to ensure that any use of logic other than FS relationships is justified. Appropriate uses for each type of logic relationship are covered in Section 5.5.8.1.

Logic Applied to Summary Activities: Logic links on summary tasks are typically viewed as a poor scheduling technique, as the summary is not a true activity but instead a grouping of activities. Logic should be tied to the actual work in the schedule.

Improper Logic: This metric may include activities with circular logic or reverse logic. Circular logic often occurs in multi-project schedules. Reverse logic is typically the result of a lead, where successor activities are scheduled to start before their predecessors.

Redundant Logic: A redundant link occurs when, in addition to the link in question, there is a more detailed logic link between the same two activities. For example, a link from Activity A to Activity C is made redundant by an existing link from Activity A to Activity B and another one from Activity B to Activity C. While redundant logic will not always create schedule calculation issues, it can make performing schedule updates as part of the Schedule Maintenance sub-function more cumbersome. It also makes logic traceability, such as critical path traces, more difficult for Schedule Assessment and Analysis.

Out-of-Sequence Logic: This metric checks for clashes between logic and progress/status updates. If a successor activity is in progress or complete before the predecessor activity has started (tied with a FS link), then either the status or the logic is wrong. Out-of-sequence tasks cause questionable total float calculations for the tasks involved and may prevent identification of the critical path. This may be an indicator that the level of task detail is not sufficient.

Improper Use of Leads/Lags: This metric counts the number of leads and/or lags in the schedule. Leads are often used to adjust the successor start or end date relative to the logic link applied, which can result in the successor starting before the predecessor. Lags are positive durations or delays associated with logic links, which tend to hide detail in schedules and cannot be statused like normal activities. Furthermore, uncertainty distributions cannot be applied to lags for the purposes of SRAs, potentially underestimating schedule impact due to uncertainty. Lags should be replaced with activities. Further discussion of leads and lags can be found in Section 5.5.8.2.
Figure 6-9. Typical metrics and key indicators that are used to assess the schedule’s health and mechanics.
These metrics are general guides and are not meant to be exhaustive. Further, these metrics need to be
carefully investigated to mitigate any potentially misleading conclusions. The P/S’s technical and
topological understanding of the schedule, which develops over time and with data exposure, will
determine which metrics matter the most.
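Many of these indicators reduce to simple queries over a flat export of the schedule. The following sketch, with illustrative field names and thresholds (not those of STAT or any specific tool), shows how two of them, high durations and missing logic/open ends, might be computed:

```python
# Minimal sketch of two automated health check metrics over a flat
# task export. Records, thresholds, and exempt UIDs are illustrative.

tasks = [
    {"uid": 1, "dur_days": 10, "preds": [],  "succs": [2], "milestone": False},
    {"uid": 2, "dur_days": 55, "preds": [1], "succs": [3], "milestone": False},
    {"uid": 3, "dur_days": 0,  "preds": [2], "succs": [],  "milestone": True},
    {"uid": 4, "dur_days": 20, "preds": [],  "succs": [],  "milestone": False},
]

HIGH_DURATION_DAYS = 44  # roughly two months of working days

high_duration = [t["uid"] for t in tasks
                 if not t["milestone"] and t["dur_days"] > HIGH_DURATION_DAYS]

start_uid, finish_uid = 1, 3  # legitimate open ends (P/p start and finish)
open_ends = [t["uid"] for t in tasks
             if (not t["preds"] and t["uid"] != start_uid)
             or (not t["succs"] and t["uid"] != finish_uid)]

print("High-duration tasks:", high_duration)           # -> [2]
print("Open-ended tasks to investigate:", open_ends)   # -> [4]
```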
6.2.2.1.2.2 Step 2. Interpret and Assess the Automated Health Check Report
The results of each automated Health Check mark the beginning of the mechanical analysis of the IMS.
The P/S should use the Health Check Reports as a roadmap for investigation into root causes toward
schedule improvement. An example Health Check Report illustrating potential avenues of investigation
is shown in Figure 6-10:
Figure 6-10. Typical output from the STAT Health Check tool.
The color-coding provides a first look at whether or not certain aspects of the schedule are “healthy”.
However, additional scrutiny must be exercised to understand whether “green” actually indicates
“healthy” and “red” or “yellow” actually indicates “unhealthy.” For example, the measurement for
“Constraints other than ASAP” could be green, but it only takes a single “Must Finish On” constraint to
invalidate the entire schedule. The Schedule Health Check provides a separate tab that lists every
constraint that is not ASAP. Each non-ASAP constraint should be carefully investigated for validity and
whether the constraint is causing an erroneous calculation of float. Likewise, the missing predecessors
and successors must also be investigated individually to make sure that a critical linkage is not missing.
A missing link may invalidate the critical path calculation.
For each metric, the P/S should recruit the appropriate expertise and informational resources to either
justify the apparent deviation from the accepted standard or identify root causes and craft a plan for
Schedule Maintenance. The P/S, who often may find that a defined, yet informal change approach is
appropriate, should lead the continuous schedule health improvement effort and track its progress
through completion. This will necessarily involve executing other Schedule Assessment procedures,
especially those from the 2nd Tier. This strategy may identify serious non-compliances or defects that
require more generalized awareness, leadership involvement, and adjudication, or a major change such
as a replan or a rebaseline. In these instances, the formal change control process should be engaged as
outlined in Section 7.3.1.
Regardless of the schedule health remediation strategy or formality, the P/S should record each Health
Check Report, an explanation of each non-compliant metric’s value (including justifying source data or
expert opinion), and a reparation plan within the BoE as part of the larger narrative.
Exit Criteria. The Health Check procedure is satisfied when the P/S has completed the following steps:
✓ Execute an automated Health Check.
✓ Interpret and assess the automated Health Check Report, investigate root causes, and
address all within a remediation plan housed in the BoE.
✓ Mark the BoE with each data source’s date according to the current schedule and
assessment iterations. Alongside this, include the next steps of schedule evolution and a
corrective action recommendation when organic schedule improvement is not sufficient.
Procedure Maturation. It is incumbent upon the P/S to perform this procedure on a continuous basis
due to its ease and informational value, which support the execution of other higher tier assessment
procedures throughout the life cycle. It is one of the most relatable procedures to stakeholders and
demands a mature BoE narrative that can drive status reporting on a regular cadence. The chronicle
within the BoE regarding health and structural issues should be especially cohesive, containing a
narrative (with the appropriate data) prioritizing issue resolution in accordance with the plan for a
progressively healthier IMS.
As with every Schedule Assessment procedure, it is impossible for the Health Check to address all issues
at once, nor should that be the expectation. However, as a goal, this procedure should resolve most
schedule health issues by the time advanced assessment, analysis, and reporting is due to stakeholders
by P/p management necessity or Agency policy directive around the SDR timeframe. Nonetheless, the
Health Check should be performed on a routine basis, as a completely healthy schedule is necessary for
advanced assessments like schedule growth or to support cost or technical reporting, in addition to
being especially critical for more sophisticated analytics, like SRAs/ICSRAs.
6.2.2.1.3 Procedure 3. Risk Identification & Mapping Check
o Sub-dimension: Schedule Risk (and Opportunity) Treatment
➢ Are all significant probabilistic schedule events identified?
➢ Is their placement within the IMS’s structure well-understood?
Probabilistic schedule events, including risks and opportunities, play an essential role in qualifying the
P/p’s programmatic posture. Without understanding these potential events, schedule reporting and
analysis is incomplete and, often, irrevocably incorrect. The Risk Identification & Mapping Check, a
successor of the Requirements Check, is the first of two risk-related assessment procedures (see also
Basis Check) meant to ensure that probabilistic schedule events are properly treated as non-optional
schedule components intimately linked to the IMS’s network of elements and, in turn, the larger
programmatic story.
The initial Step 1 tasks regard the basic bookkeeping of identified probabilistic schedule events within
the P/p’s management and assessment process frameworks alike. After these are completed, the P/S
should turn his or her attention to those significant items not identified in the risk list. Since these
unidentified risks,
to the extent that they are at all discoverable, are uncovered primarily by executing deeper 2nd Tier
assessment procedures that regard the basis and evolution of schedule and risk data, this procedure
stipulates that the P/S only record within the BoE’s risk list those uncodified risks readily identifiable via
review of information on-hand, including findings previously rendered by technical experts and from
insight generated by the other 1st Tier assessment procedures - the Requirements Check and Health
Check. (A more in-depth round of investigation and consultation with technical experts within or
independent of the P/p is entailed by the Basis Check.) The final Step 1 task is as follows:
✓ To the extent possible using freely available P/p information and documentation, augment the
BoE’s risk list with previously unidentified, discrete items representing events of significant
probability and consequence to elements within the IMS. Mark them as candidate risks or other
P/p utilized designation and include a descriptive narrative.
It is worth discussing here the other types of special probabilistic schedule events that may warrant a
P/S’s attention. Within the IMS itself, it may be appropriate for the P/S and/or Schedule Analyst to
include probabilistic branching events tied to two or more possible downstream paths, each with
associated probabilities, flowing from a common point of departure. These techniques are described in
Section 6.3.2.3.5. Further, there are certain probabilistic events that, if occurring, could necessitate a
P/p replan effort (not necessarily a schedule or P/p rebaselining) entailing a reconstruction of the
schedule in unforeseen ways beyond what risks or probabilistic flows can encapsulate. These
probabilistic events should be listed and justified within the BoE and receive special attention during the
other assessment procedures, especially the Basis Check, described in Procedure 5.
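As a schematic illustration, a probabilistic branching event can be simulated by sampling which downstream path is realized on each iteration. All probabilities and durations below are invented for illustration and do not represent any specific P/p.

```python
# Minimal sketch of a probabilistic branching event: from a common
# point of departure, one of two downstream paths is realized per
# iteration. Branch probabilities and durations are illustrative.

import random

random.seed(3)

common = 30                         # days of work before the branch point
branches = [(0.7, 20), (0.3, 55)]   # (probability, downstream duration)

N = 10_000
totals = []
for _ in range(N):
    r = random.random()
    dur = branches[-1][1]           # default to the last branch
    cum = 0.0
    for p, d in branches[:-1]:
        cum += p
        if r < cum:
            dur = d
            break
    totals.append(common + dur)

totals.sort()
print("Median finish:", totals[N // 2], "days")
print("Share taking the longer branch:", sum(t == 85 for t in totals) / N)
```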
6.2.2.1.3.2 Step 2. Verify that the Risk and Opportunity Placement within the IMS is Well-Understood
by the P/p’s Schedule Team
Continuous, routine schedule risk identification provides the basis for understanding potential
threats to IMS elements. However, schedule risks lose most, if not all, of their informational power if
they are not carefully linked to the appropriate locations within the schedule network. Mapping risks to
the schedule plays a crucial role in integrating the RMS and IMS, which are often otherwise managed
completely independently. Helping to protect against a lack of communication and coordination
between these two major PP&C domains and their respective data constructs is thus a major
responsibility of the P/S.
Schedule risks have complex effects on downstream tasks and milestones. These dynamics are not
understood by risk owners unfamiliar with the IMS network; rather, risk owners often assume they can
intuitively extrapolate the effects of schedule risk consequences that cascade beyond a localized set of
tasks and logic, often erroneously citing major events (like a system CDR) as direct targets of risks’
realization. Therefore, having verified collaboration between the P/p’s schedule and risk domains, the
P/S should:
✓ Verify that the schedule risks’ consequences to specific tasks within the IMS have been
mapped in coordination with the Risk Manager and risk owners, using this information to
augment the BoE.
✓ Verify that the mapping scheme embedded within the BoE dossier’s risk list is well-justified.
✓ Verify that the set of tasks associated with each risk’s mitigation plan is embedded within
the IMS.
The above steps should be repeated for discrete opportunities and other probabilistic schedule events
as available.
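As a minimal illustration of the bookkeeping these verification steps imply, the sketch below flags unmapped risks, risks mapped to nonexistent IMS tasks, and risks whose mitigation tasks are not embedded in the IMS. The record layout is hypothetical, not the schema of any RMS or scheduling tool.

```python
# Minimal sketch of a risk-to-IMS mapping verification.
# Risk records and task UIDs are hypothetical.

ims_task_uids = {101, 102, 103, 104}

risks = [
    {"id": "R-01", "mapped_tasks": [102, 103], "mitigation_tasks": [104]},
    {"id": "R-02", "mapped_tasks": [],         "mitigation_tasks": []},
    {"id": "R-03", "mapped_tasks": [999],      "mitigation_tasks": []},
]

for r in risks:
    if not r["mapped_tasks"]:
        print(r["id"], "- no IMS consequence mapping; coordinate with risk owner")
    bad = [u for u in r["mapped_tasks"] if u not in ims_task_uids]
    if bad:
        print(r["id"], "- mapped to nonexistent IMS tasks:", bad)
    if not r["mitigation_tasks"]:
        print(r["id"], "- mitigation plan tasks not embedded in the IMS")
```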
Exit Criteria. The Risk Identification & Mapping procedure is satisfied when the P/S has completed the
following tasks:
✓ Identify P/p’s set of risks & other discrete probabilistic schedule events.
✓ Verify that the risk and opportunity placement within the IMS is well-understood by the P/p
team.
✓ Mark the BoE with each data source’s date according to the current schedule and
assessment iterations. Alongside this, include the next steps of schedule evolution and a
corrective action recommendation when organic schedule improvement is not sufficient.
Procedure Maturation. The P/S should continuously monitor and assist, where possible, the mapping of
newly uncovered risks as they are identified by the Risk Management function in close association with
this procedure and the Basis Check. This is a considerable effort involving many responsible parties and
is best facilitated in a prioritized, progressive manner over time, easing the burden on all involved. All
schedule risk-related updates should be recorded in the BoE to enable schedule risk transparency and
traceability over time.
6.2.2.2.1 Procedure 4. Critical Path (and Driving Path) and Structural Check
It is a best practice for routine Critical Path and Structural Checks to be performed to affirm the
integrity of the IMS’s overall network logic flow, including each potential critical path.
88NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2018. Page 50. https://nodis3.gsfc.nasa.gov/npg_img/N_PR_7120_005E_/N_PR_7120_005E_.pdf
The 1st Tier’s assessment procedure results, particularly those rendered after several cycles through the
Health Check, are prerequisites for a more penetrating assessment of the schedule’s structure. While
that procedure parsed schedule elements individually, the following two, logically ordered steps of the
Critical Path and Structural Check serve as approaches to assessing the IMS as an integrated whole.
Tasks connected by logical links are the skeletal structure upon which milestones are situated. This
structure should be horizontally traceable in that every task and milestone should have a link to at least
one predecessor and one successor (unless located at the beginning or the end of the schedule or
representing interim delivery events from or to external entities). GAO comments on the importance of
coherent task and milestone sequencing:
Such links serve to verify that activities are arranged in the right order for achieving
aggregated products or outcomes.89
Horizontal traceability is a necessary but insufficient condition for holistic schedule integrity since any
logical path spanning the schedule may still display defective dynamic behavior through logical
insensitivity to significant schedule modifications (e.g., a dramatic move in time for tasks or milestones
that results in no change in total float or downstream milestone dates). To test both of these
conditions, given satisfactory Schedule Health Check results and issue remediation, the P/S should
perform a Shock Test to identify defects in schedule logic flow, including errant constraints, by
examining the change in total float associated with each path terminating at the IMS’s end milestone (or
select interim milestone) due to a change in upstream schedule elements’ duration or temporal
placement. In this way, the structural soundness of all possible current and potential critical paths is
verified at once. A schedule that successfully passes the Shock Test can be characterized generally as
both healthy (though some issues may remain given Schedule Health Check indicators) and structurally
sound. Even at early stages of development, this is an achievable goal for the P/S.
6.2.2.2.1.1 Step 1. Perform a Shock Test on the IMS to Verify its Overall Structural Integrity via
Horizontal Traceability and Proper Dynamic Behavior
Shock Test Process
1. Setup
✓ Select a tool that allows easy manipulation of task durations. This could be native
schedule software such as MS Project or any other user-friendly package. Since this test
involves experimenting with task durations, the P/S may find it easier to work with
simulation software that can easily manipulate the durations of one or more tasks.
✓ Search the IMS for margin activities and set their durations to zero. Add the existence of
these margin tasks to the BoE structural issues list.
✓ Initialize Task Selection: For the initial iteration of the test, select all tasks that share
the IMS start date or mark the beginning of a logical string.
2. Perturb the Schedule Manually and Uncover Potential Issues for Investigation
89 GAO-16-89G. GAO Schedule Assessment Guide. Page 71. December 2015. http://www.gao.gov/assets/680/674404.pdf
✓ Artificial Schedule Compression Step
➢ For each task within the selected set in turn, zero out its duration. Follow the flow
of tasks downstream until an eventual successor does not move backward in time.
Note either that every downstream task in the string ending in the IMS’s terminal
milestone moved to the left, or add the reason why the eventual successor did not
move (due to logic, flow confluence, a constraint, or another reason) to the schedule
BoE structural issues list. (A schematic sketch of this compression step appears after
Figure 6-11.)
➢ Artificially correct the potential issue so that others may be progressively isolated
during subsequent iterations.
➢ For all such subsequent iterations, successively select the first tasks downstream
from the original selected tasks that did not change position after the last iteration.
✓ Artificial Schedule Slip Step
➢ For each task within the selected set in turn, increase its duration until it becomes
critical, noting which downstream activities move out accordingly.
▪ If the task does not eventually fall on the critical path, then there is a structural
issue with the IMS such as missing logic, hard constraint usage, or an eventual
missing successor. Add the potential issue to the BoE structural issues list.
▪ If the task eventually falls on the critical path but IMS’s terminal milestone date
does not change, there is build-up of negative float somewhere in the network,
likely caused by a hard constraint inhibiting the downstream flow. Add the
potential issue to the BoE structural issues list.
➢ Artificially correct the potential issue so that others may be progressively isolated
during subsequent iterations.
➢ For all such subsequent iterations of this step, successively select the first tasks
downstream from the original selected tasks that did not change position after the
last iteration.
3. Iterate Until the End of the Schedule Is Reached and All Tasks Have Been Shocked
4. Investigate Each Identified Item in the BoE Structural Issues List
✓ Progress through the structural issues list in the BoE, which will likely be well-populated
the first time the Shock Test is performed. The P/S should, as always, investigate all
uncovered issues with schedule owners and other experts, retiring those that have
adequate justification recorded within the BoE. Those that remain will likely have
appeared in Schedule Health Checks prior, may yet be unresolved, and likely (but not
necessarily) fall within one of the following common areas of interest (organized here by
investigatory question):
✓ Float
➢ Do activities that have a large amount of free float have missing or incomplete
logic?
➢ Does the confluence of unrelated task strings at a node, and its effects on float (or
‘margin’, in some programmatic contexts), make sense? Is the IMS’s total float
affected unexpectedly by these inflowing activities? Were the individual flows
rationally derived?
✓ Logic
➢ Are there any obviously incorrect activity relationships? Are all known logic
interdependencies amongst segments of work represented within the IMS?
➢ Are the schedule activities sequenced in a logical manner to complete the given
work scope shaped by the P/p’s set of requirements? (See Requirements Check
assessment procedure.)
➢ Are the relationship types (such as SS or FF) appropriate for the logical flow in
question?
➢ Are tasks missing successors? Is there a point where the downstream flow
terminates unexpectedly or becomes out of phase with the tasks in that area of the
IMS? Are hard constraints causing these issues?
✓ Lags
➢ Are lags causing issues with dynamic schedule behavior? Are these lags justifiable?
➢ Is there rationale for setting these lags, either positive or negative, to zero duration?
✓ Constraints
➢ Are hard or soft constraints inhibiting the downstream flows? (Filtering for
constraints may assist in this investigation.) Are the hard constraints justifiable?
Are they causing negative float?
➢ Should special exceptions be made for select soft constraints?
➢ Should special exceptions be made for select “Must Start On” hard constraints
distorting the downstream flow? Are they intentional tools that align work with the
budget? Could they be expressed in another way?
5. Craft a Resolution Plan for All Potential Issues and Record It Within the Schedule Basis of
Estimate
✓ After all investigative efforts have been completed and documented within the schedule
BoE, the P/S should work with Technical Leads to derive a resolution plan for all
remaining issues. The plan should include remediation tasks assigned to the
appropriate Technical Leads, durations for remediation tasks, goal targets, a means for
recording progress, and a status reporting cadence. The P/S should take great care that
the plan is housed within the BoE and tracked over time. For major structural issues,
the resolution plan should entail execution of a corrective action.
6. Repeat this and 1st Tier Assessment Procedures (as necessary), especially the Schedule Health
Check
✓ If a large quantity of schedule issues is uncovered and not dispensed with by adequate
justification, rerun the Schedule Health Check procedure and resolve the resultant issues
before reengaging the Critical Path (and Driving Path) and Structural Check procedure.
As shown in Figure 6-11, and as with all assessment activities, the Shock Test will ultimately result in an
update to the issue remediation plan initially rendered by the performance of each 1st Tier assessment
iteration. In fact, at this juncture, as suggested above, the completion of the first Shock Test may
necessitate an immediate revisitation of the 1st Tier procedure set if the results are well below nominal.
Figure 6-11. Illustration of the Increasing Task Durations and the Potential Effects During the Shock Test.
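The compression step lends itself to a schematic rendering. The toy network below uses only FS links plus one planted start-no-earlier-than constraint to show how an unresponsive downstream flow is caught; an actual Shock Test would be performed within the scheduling tool itself, and all names, durations, and the constraint are invented.

```python
# Schematic sketch of the Shock Test's compression step on a toy
# network of FS links. The constraint on task D is hypothetical,
# planted to show how an inhibited downstream flow is detected.

durations = {"A": 10, "B": 20, "C": 15, "D": 5}
preds = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}
order = ["A", "B", "C", "D"]          # topological order
constraints = {"D": 45}               # start-no-earlier-than day 45

succs = {t: [] for t in order}
for t, ps in preds.items():
    for p in ps:
        succs[p].append(t)

def downstream(task):
    seen, stack = set(), [task]
    while stack:
        for s in succs[stack.pop()]:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

def finish_dates(durs):
    finish = {}
    for t in order:
        start = max((finish[p] for p in preds[t]), default=0)
        start = max(start, constraints.get(t, 0))
        finish[t] = start + durs[t]
    return finish

baseline = finish_dates(durations)

for shocked in order:
    moved = finish_dates(dict(durations, **{shocked: 0}))  # zero one duration
    stuck = sorted(t for t in downstream(shocked) if moved[t] == baseline[t])
    if stuck:  # successors that failed to move left -> BoE structural issues list
        print(f"Zeroing {shocked}: no movement at {stuck}; investigate logic/constraints")
    else:
        print(f"Zeroing {shocked}: downstream flow responded as expected")
```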
6.2.2.2.1.2 Step 2. Understand and Assess the Deterministic Critical Paths (and Driving Paths)
After the eventual success of the Shock Test, the structural integrity of the IMS can be mostly affirmed.
This naturally enables a more focused, prioritized look at the most important part of the schedule: the
set of task-to-milestone logical strings comprising the critical paths (and driving paths), both
deterministic and stochastic. The P/S should approach deterministic critical paths via three activities:
identifying the critical paths and driving paths, understanding them within the larger P/p context, and,
similar to other assessment procedures, scrutinizing them in the context of Agency and industry standards.
Understand the Critical Paths (and Driving Paths)
In addition to accurately identifying and verifying the critical path and driving path, it is also important
for the P/S to understand those paths. The P/S should pursue the following sets of questions (at a
minimum) and related issues when attempting to understand the critical paths:
➢ Given the P/p’s requirements and contextual information, do the critical path, as well as the
near-critical paths and driving paths embedded within the IMS, make sense? Are they
misrepresented? Is there a lag between reality and what is captured in the critical path set? Is
there content that P/p management is treating as schedule critical that does not appear within
the critical path?
➢ Does the CP task sequence pass the common-sense test? Does it match the work-flow diagrams
that were used to develop the IMS? Should the sequencing be modified to align better with P/p
realities?
➢ Are the critical path, near-critical paths, and driving paths identified in the IMS the same as
those profiled by P/p leadership? (Hint: Sometimes there are differences.) See Figure 6-12 for
an example of a typical critical path report that may not actually capture the real critical path as
measured within the IMS.
➢ Does the status of accomplished work for activities within the critical path set indicate schedule
delays? Do other analyses support this indication? (See the analysis section at the end of this
chapter.)
➢ Is the amount of float correct and justifiable? Does it make common sense? (Section 6.3.)
➢ Could additional shifts accelerate any critical path? What is the technical justification?
Can work otherwise be performed in parallel to accelerate progress on any critical path?
Figure 6-12. An example of the primary, secondary, and tertiary critical paths as reported on a Summary Schedule, which should
be verified for accuracy.
This list of inquiry threads, though not exhaustive, should lead the diligent and informed P/S, having
performed all the 1st Tier procedures and the Shock Test at least once, to uncover all salient aspects of
the critical path set. As always, all findings associated with the critical path (and driving paths), including
the answers to the above and related questions, should be documented within the schedule BoE.
Assess the Critical Paths (and Driving Paths)
Once the P/S understands the critical paths, as well as the near-critical or driving path set, it is
appropriate to perform a more focused assessment in the continuing spirit of the Schedule Health Check
and Shock Test. The aim of this pass is for the P/S to further identify potential issues and to recommend
improvements to the critical paths’ (and driving paths’) construction, even if the IMS is structurally
sound according to the Shock Test; there may be certain P/p truths, dynamics, and hard realities not yet
captured.
The following sets of assessment areas and exploratory questions necessarily and by design resemble
the same general classes examined by the Schedule Health Check and Shock Test (and other procedures)
but relate more specifically to the critical paths, as described below.
Level of Detail and Interdependencies
The level of detail contained within the critical paths (and, by extension, other areas within the IMS) is
bounded by two conditions: inadequate detail that omits key schedule information and precludes
proper dynamic behavior; and an unreasonably heavy level of detail that prevents tractability of the
critical paths and IMS from a basic data management and analysis standpoint. The P/S should therefore
pursue these and related questions:
➢ Does the level of detail reflected within the critical path, near-critical paths, and driving paths
help the IMS to achieve its full informational potential? Do these paths maintain a standard of
tractability that supports insight-generation while also preserving meaningful fidelity to the rich
set of schedule data that drives them?
➢ Is the level of detail adequate so that task interface points can be identified to support accurate
interdependency assignments?
➢ Do the critical path, near-critical paths, and driving paths support traceability by including task
durations that are reasonably short, meaningful, and allow for discrete progress measurement?
The GAO standard regarding the phenomenon of inadequate level of detail contained within
the critical paths and IMS as a whole is simple:
The detail should be sufficient to identify the longest path of activities through the
entire program.90
➢ Is the critical path, as well as the near-critical and driving path set, sufficiently summarized to
support schedule understandability and intelligibility?
90 GAO-16-89G. GAO Schedule Assessment Guide. Page 11. December 2015. http://www.gao.gov/assets/680/674404.pdf
➢ Do the identified critical path, near-critical paths, and driving paths reflect an excessive amount
of low-level complexity? Does this undermine the utility of the IMS in meeting program
management and policy demands?
➢ Are the critical paths so heavy with detail that they require the schedule team to maintain
supplementary schedules that are treated as authoritative?
The GAO position on the phenomenon of intractable level of detail contained within the
critical paths and IMS as a whole is clear:
The schedule should not be so detailed as to interfere with its use.91
Risks
➢ Are the critical paths constructed in a way that supports the identification and application of
risks? Should the paths’ resolution increase to support proper risk placement?
➢ Are there risks missing from the critical paths?
➢ Are there risks misplaced within the IMS’s logic network that should affect the critical paths?
Lags and Gaps
➢ Are there any gaps in time between critical path tasks that cannot be explained?
➢ Is the justification associated with the lags that remain within the critical paths’ logic
valid? The P/S should use special caution when scrutinizing and accepting the justification
for these items.
Float and Margin
➢ Are margin tasks part of the critical path? What is their justification? What is the schedule
owner’s argument against their removal? What is driving it? Is it valid?
➢ Do the structure and flow of the critical paths allow for accurate calculation of float (or
margin, in some programmatic contexts)? Do all critical interdependencies, constraints, and
task lengths support true understanding of float?
➢ Is the critical path’s quality sufficient to support reliable Schedule Risk Analysis and its
measurement of float?
Task Types
➢ Are there LOE tasks embedded within the critical paths? What is the justification for not
removing them?
Regardless, all questions, answers, investigative notes, and related assessment information regarding
the critical path set should be recorded within the schedule BoE. It may also help the P/S, when
91 GAO-16-89G. GAO Schedule Assessment Guide. Page 16. December 2015. http://www.gao.gov/assets/680/674404.pdf
coordinating with Technical Leads, to mark the critical paths for further investigation, as shown in
Figure 6-13 and Figure 6-14. These types of markings should also be included in the schedule BoE.
Figure 6-13. An example of a schedule report marked up by the P/S with feedback for Technical Lead consideration.
Figure 6-14. An example of questions the P/S might consider when assessing the critical (and driving) paths.
Exit Criteria. The Critical Path and Structural Check procedure is satisfied when the P/S has completed
the following tasks:
✓ Perform a Shock Test on the IMS to verify its overall structural integrity via horizontal
traceability and proper dynamic behavior.
✓ Identify, understand, and assess the deterministic critical paths.
✓ Mark the BoE with each data source’s date according to the current schedule and assessment
iterations. Alongside this, include the next steps of schedule evolution and a corrective action
recommendation when organic schedule improvement is not sufficient.
Procedure Maturation. This procedure, as suggested above, should be performed after the 1st Tier
procedures begin to render favorable results. In anticipation of analytical products that support
management decisions and policy requirements, the structural integrity of the IMS should be maximized
prior to SDR, so the Critical Path and Structural Check should be iterated as necessary. Additionally, the
check should be executed on a regular basis after SDR since the P/p’s critical paths will change over time
and should be measured for performance. Regardless of the moment in the life cycle during which this
check is performed, the types of assessments will remain consistent, with the amount of content
pursued increasing over time until late in the life cycle. As the P/p progresses, the narrative within the
schedule BoE should be augmented such that, at P/p completion, it contains the full narrative of critical
path and structural evolution of the IMS from inception to retirement.
182
6.2.2.2.2 Procedure 5. Basis Check
It is a best practice for Basis Checks to be routinely performed to affirm the quality of the estimate
associated with each discrete schedule element.
P/Ss, having made iterative attempts to affirm the scope, health, risk posture, and structural integrity of
the IMS, may now turn their attention toward understanding the DNA, or basis, of a schedule estimate.
A schedule is simply a forecast extending into the future and, as such, is nothing more than a collection
of estimated elements, each with a basis rationale. The basis for each of these schedule elements is
extraordinarily important for measuring schedule reliability; the IMS, even having satisfactorily passed
the preceding and succeeding assessment procedures, has very little meaning if strong rationale is not
associated with all elements.
The role of the P/S herein is to measure the realism associated with each schedule element and, in turn,
the amalgamation of the elements into an integrated whole. This approach is more nuanced than it may
appear; the P/S’s priority should be not to use the measure to apply a strict grade against the IMS but,
rather, to help the P/p improve the schedule’s reliability by enhancing its realism as much as possible.
This is done by providing an evolutionary path of progressive improvement over time. Best among the
various strategies for assessing and evolving the collection of schedule basis rationale is to establish
priorities for each procedure iteration, instead of attempting to assess all schedule elements at once. In this
this way, focus areas such as the critical paths can take precedence and receive repeated scrutiny
alongside new areas of interest with each successive assessment cycle.
The progressive development of robust schedule basis rationale through iterative performance of the
Basis Check supports continuous Schedule Management because, to a degree deeper than the other
assessment procedures, it strikes at the foundational truths of a P/p’s activity at its schedule’s deepest
levels. Ultimately, a clear capture of realism should be the paramount goal of the P/S.
For each schedule element, the P/S should record within the schedule BoE the quality of source data
and the method that uses that data to derive each of the element’s parameters, such as duration,
linkages, constraints, mapped risks, and any other element characteristic. The burden of proof is on P/p
personnel, programmatic and technical, to defend the quality of their choices as manifested in the basis
rationale for each element. In every case, the P/S should ensure that every element’s basis rationale is
fully and clearly traceable to underlying source material, such as tools, technique explanations,
estimating methodology, and supporting data, all of which should be included within the BoE.
The assessment of basis rationale is a considerably demanding effort that cannot be completed at once,
especially when the P/p and its schedule are undergoing peak maturation. As such, the P/S should
prioritize areas of interest (perhaps, for example, beginning with top schedule risks or the primary
critical path) and progress deliberately over Basis Check iterations in concert with the other assessment
procedures.
For any of the schedule estimating techniques, as described in Section 5.5.9.3, the P/S should judge
methods and data used, marking shortcomings within the BoE and suggesting a plan for the
incorporation of alternative estimating approaches as appropriate. In some extreme cases, a corrective
action may be necessary to facilitate changes to the IMS if the new estimating method or data sources
warrant.
Exit Criteria. The Basis Check procedure is satisfied when the P/S has completed the following tasks:
✓ Prepare the schedule basis of estimate to capture each element’s basis rationale.
✓ Assess each schedule element’s basis rationale.
✓ Mark the BoE with each data source’s date according to the current schedule and
assessment iterations. Alongside this, include the next steps of schedule evolution and a
corrective action recommendation when organic schedule improvement is not sufficient.
Procedure Maturation. Assessment of schedule elements’ basis rationale as captured within the BoE
should be performed routinely throughout the P/p’s life cycle, similar to other assessment procedures.
It is important for this particular assessment campaign to be launched during Schedule Development, as
detailed in Section 5.5.9, since it may have the most influence on schedule element parameters as they
are being derived for the first time. Once a schedule is baselined, the P/S should expect the Basis
Check to command less, but still considerable, influence on schedule evolution.
Performance assessment of schedule element parameters is critical in completing the narrative of the
IMS’s evolution and supporting earnest reporting to management. This check should serve as the
foundation for task duration growth and slippage assessment over time, as well as schedule risk
mitigation performance assessment. Chapter 7 contains a thorough discussion of performance
assessment and its role in Schedule Management.
6.2.2.2.3 Procedure 6. Resource Integration Assessment
It is a best practice for Resource Integration Checks to be routinely performed to affirm that the P/p’s
budget, workforce, and cost estimates map to the P/p IMS.
The P/S, in support of ICSRA and the higher tier assessments that it fuels, should perform a simple but
important assessment procedure pertaining to the meld between the P/p’s financial and schedule
domains. The P/p’s resource suite, comprising budget streams, workforce profiles, allocated UFE, and
related elements, should reflect self-consistent time phasing. Once the P/S has verified this by
inspection, he or she must affirm through deeper investigation that these elements map cleanly to the
IMS, demonstrate clear mutual traceability, and tie to the same P/p snapshot in time.
Though both flow from P/p plans and requirements, the financial and schedule pictures may evolve
independently, especially when content is added to or deleted from the technical portfolio. The P/p
must document within its BoE evidence supporting the robustness of the linkage amongst these
programmatic components and demonstrating that a change arising, for example, from a schedule
performance issue does not perturb their alignment. The P/S must evaluate and record the strength of
this linkage within the BoE.
(The assessment of time-phased budget adequacy given the IMS temporal work flow is not part of this
assessment procedure; refer to the ICSRA in Section 6.3.2.4 for more information.)
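At its simplest, this verification reduces to checking that each resource element maps to an IMS task and agrees with it on time phasing, as in the hypothetical sketch below; field names and records are illustrative only.

```python
# Minimal sketch of a resource-to-IMS mapping check: every budget or
# workforce element should tie to an IMS task and agree on time phasing.

ims_tasks = {
    101: {"start": "2026-01", "finish": "2026-06"},
    102: {"start": "2026-04", "finish": "2026-11"},
}

resource_elements = [
    {"id": "BUD-1", "task": 101,  "phased": ("2026-01", "2026-06")},
    {"id": "BUD-2", "task": 102,  "phased": ("2026-01", "2026-11")},  # starts early
    {"id": "WF-1",  "task": None, "phased": ("2026-05", "2026-09")},  # unmapped
]

for e in resource_elements:
    t = ims_tasks.get(e["task"])
    if t is None:
        print(e["id"], "- no IMS mapping; record in BoE for disposition")
    elif (t["start"], t["finish"]) != e["phased"]:
        print(e["id"], "- time phasing disagrees with IMS task", e["task"])
```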
Exit Criteria. The Resource Integration Check procedure is satisfied when the P/S has completed the
following tasks:
✓ Verify the robust linkage between the P/p’s financial and schedule domains.
✓ Mark the BoE with each data source’s date according to the current schedule and
assessment iterations. Alongside this, include the next steps of schedule evolution and a
corrective action recommendation when organic schedule improvement is not sufficient.
Procedure Maturation. As the P/p proceeds through formulation, it may take dedicated attention for
its PP&C office to align the cost and schedule domains. Regardless, this effort should be completed
prior to the initial baseline. Thereafter, the P/S should continuously check the status of the P/p’s given
resources, curated cost estimates, and their explicit mapping to the IMS by programmatic personnel.
✓ Complete all assessment procedures in tier order at least once (though it is unlikely that a linear
path through these activities is possible).
✓ Mark the BoE with each data source’s date according to the current schedule and assessment
iterations. Alongside this, include the next steps of schedule evolution and a corrective action
recommendation when organic schedule improvement is not sufficient.
The Schedule Assessment sub-function is never fully satisfied due to the interminable string of changes
that will befall the P/p and inevitably ripple throughout the IMS.
Additional perspectives that guide explorations related to Schedule Assessment can be gleaned from
Schedule Analysis as described in the following section.
6.3.1 Prerequisites
Schedule Analysis can be initiated when:
• The SMP sub-plan, Schedule Assessment and Analysis Plan, which specifies the analysis methods
performed to understand the projected future performance of the planned schedule given the
associated uncertainty and risk, is available
• Analysis ground-rules and assumptions are available
• Pre-planned initiating event or special request has occurred
• If SRA:
o An IMS or an Analysis Schedule is available
o Task duration uncertainties are available
o A discrete risk list is available, current, and complete, and there is clarity regarding risk
mitigation planning and funding
o If EVM is being performed, EVM reports are available
• If ICSRA, all the above plus:
o A cost estimate is available
o Cost uncertainties are available
o Cost risks, including liens, encumbrances, threats, and sunk costs are available
The SRA utilizes the IMS, or the Analysis Schedule if available, as its foundation and captures both
uncertainties and discrete risks. The SRA analyzes the potential impact of uncertainties and risks on the
duration of the activities in the IMS and calculates the probability distributions for selected activity
completion dates. Thus, SRAs can help facilitate meeting the NPR 7120.5 and NPD 1000.5 requirements
to produce Schedule Completion Ranges, also referred to as Schedule Range Estimates, and Schedule
Confidence Levels or Joint Cost and Schedule Confidence Levels (JCLs) at the applicable P/p milestones.92
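The mechanics can be illustrated with a minimal Monte Carlo sketch: triangular duration uncertainty on a small serial network plus one discrete risk, with percentiles of the simulated finish serving as a rough analogue of a Schedule Completion Range. All distribution parameters and the risk below are invented for illustration.

```python
# Minimal Monte Carlo SRA sketch: triangular duration uncertainty on a
# serial three-task network plus one discrete risk. All parameters are
# illustrative only.

import random

random.seed(1)

# (low, likely, high) duration in days for each serial task
tasks = [(20, 25, 35), (30, 40, 60), (10, 15, 25)]
risk = {"probability": 0.30, "impact_days": (10, 20, 40)}  # one discrete risk

N = 10_000
finishes = []
for _ in range(N):
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    if random.random() < risk["probability"]:       # risk realized this iteration
        lo, mode, hi = risk["impact_days"]
        total += random.triangular(lo, hi, mode)
    finishes.append(total)

finishes.sort()
for pct in (0.30, 0.50, 0.70):
    print(f"{int(pct*100)}th percentile finish: {finishes[int(pct*N)]:.0f} days")
```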
Figure 6-15 illustrates the two types of schedule analyses that are performed: SRA and ICSRA.
92 In addition to the NPR 7120.5 and NPD 1000.5 requirements, the NASA Associate Administrator issued a guidance memo on
JCL Requirements Updates on May 24, 2019 for all projects and Single-project Programs with an LCC of $1 billion or more.
https://www.nasa.gov/sites/default/files/atoms/files/jcl_memo_5-24-19tagged.pdf
Figure 6-15. The ICSRA differs from the SRA only by the addition of costs and cost risks. In addition to the risk-informed schedule outputs from an SRA, the ICSRA will also provide risk-informed cost outputs.
As noted in the introduction to this chapter, Schedule Analysis also supports overall schedule reliability
for Schedule Assessment according to the 3rd Tier shown in Figure 6-6.
Figure 6-16. Overview of the Schedule Analysis (and Assessment) process.
Typical risk-informed insights provided by the SRA include:
• Schedule options based on alternative workflows or alternative technical options (e.g., Analysis of Alternatives)
• The probability of meeting the planned schedule or finishing the P/p on time given the
associated uncertainties and discrete risks (i.e., Schedule Confidence Levels or Completion
Range Estimates)
• Uncertainty and risk drivers, as well as risk prioritization for mitigation activities (e.g., Risk
Sensitivity Analysis for Risk Prioritization)
• “Most likely” critical/driving path(s) given associated risks and uncertainties (e.g., Stochastic
Critical Path(s))
• The potential impact of uncertainties and discrete risks on schedule margin (e.g., Allocation and
Sufficiency of Margin)
• Risk-informed forecasting changes over time (e.g., Risk-based Trend Analysis)
ICSRAs involve the addition of a cost model and additional cost uncertainties and risks to the SRA Model.
The ICSRA analyzes the impact of risks on the duration of the activities and the costs loaded in the IMS
and calculates the probability distributions for selected activity completion dates as well as the
probability distributions for the costs of selected activities and/or cost at completion. For space flight
and information technology P/ps, NPD 1000.5 establishes the required cost and schedule JCL expected
for P/p baseline and rebaseline plans. The JCL indicates the probability that P/p costs will be equal to or
less than the targeted cost and that the schedule will be equal to or less than the targeted schedule
date. PMs must establish the necessary processes that enable them to determine if their
implementation plans are compliant with the cost and schedule JCL requirements. The ICSRA facilitates
meeting the JCL requirement by combining a P/p’s cost, schedule, and risk into a fully integrated
management tool. This analysis, and its resulting JCL, helps inform management of the likelihood of a
P/p’s programmatic success.
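A sketch of the JCL calculation itself, assuming a set of simulated (cost, schedule) outcome pairs is already available from an ICSRA run; the simple cost model, targets, and coefficients below are invented for illustration:

```python
import random

random.seed(1)
# Hypothetical ICSRA outcomes: (cost in $M, schedule slip in days) pairs,
# with cost partly driven by the schedule slip so the pair is correlated.
scenarios = []
for _ in range(10_000):
    slip = random.triangular(-30, 180, 20)           # schedule outcome
    cost = 500 + 0.8 * slip + random.gauss(0, 15)    # illustrative cost model
    scenarios.append((cost, slip))

target_cost, target_slip = 540.0, 60.0
# JCL: fraction of scenarios meeting BOTH the cost target and the schedule target.
jcl = sum(c <= target_cost and s <= target_slip for c, s in scenarios) / len(scenarios)
print(f"JCL at (${target_cost}M, {target_slip} days): {jcl:.0%}")
```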
Typical additional insights provided by the ICSRA include:
• The probability of meeting both the planned cost and the planned schedule given the associated
uncertainties and risks (e.g., JCL Analysis)
• The potential impact of uncertainties and discrete risks on funding and UFE (e.g.,
Funding/Reserves Analysis)
The Analysis Execution that details these different analyses is covered in Section 6.3.2.5.
At a minimum, an SRA tool should:
1. Be compatible with common scheduling tools used by NASA.
2. Support the loading of duration uncertainty probability distributions for every activity in the
schedule, excluding summary activities.
3. Support the loading of discrete risk likelihoods and duration impact probability distributions for
every activity in the schedule, excluding the summary activities.
4. Support the loading of TD and TI discrete risk likelihoods and impact probability distributions for
every cost parameter in the model.
5. Provide the capability to calculate cost inflation as indexed by year.
6. Support the loading of a correlation factor for every duration probability distribution loaded in
the simulation model.
7. Output the following data:
a. A data sheet that tabulates the duration inputs and outputs for every iteration.
b. A PDF and a CDF data table and plot for every activity in the simulation model.
c. Tornado data tables and charts that correlate risks and uncertainties with activity durations and rank them in the tornado chart display.
d. The calculation and display of the percentage of times that an activity was on the critical
path.
If performing an ICSRA, the tool should also:
1. Support loading TD and TI cost values for every activity in the schedule.
2. Support the loading of TD and TI cost uncertainty probability distributions for every cost
parameter in the model.
3. Provide the capability to calculate cost inflation as indexed by year.
4. Support the loading of a correlation factor for every cost probability distribution loaded in the
simulation model.
5. Output the following data:
a. A data sheet that tabulates the cost and schedule pair inputs and outputs for every
iteration.
b. A PDF and a CDF data table and plot for every activity and associated cost in the
simulation model.
c. Tornado data tables and charts that correlate risks and uncertainties with activity durations and activity costs and rank them in the tornado chart display (see the sketch following this list).
d. The calculation and display of the percentage of times that an activity was on the critical
path.
e. A scatterplot that shows the simulated outcomes of the ICSRA, where each dot in the
scatterplot represents a specific result, or scenario, from the simulation calculation.
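Output requirement (c) in both lists calls for tornado rankings. A sketch of how such a ranking can be computed from simulation samples follows; the activity names and ranges are hypothetical, and statistics.correlation requires Python 3.10 or later:

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(2)
# Hypothetical uncertain activity durations, each as (min, max, most likely) days,
# all feeding a single finish date.
ranges = {"ActA": (10, 40, 20), "ActB": (30, 90, 50), "ActC": (5, 15, 8)}
samples = {name: [] for name in ranges}
totals = []
for _ in range(5_000):
    total = 0.0
    for name, (lo, hi, ml) in ranges.items():
        d = random.triangular(lo, hi, ml)   # one duration draw per input per iteration
        samples[name].append(d)
        total += d
    totals.append(total)

# Tornado ranking: correlate each input's draws with the total finish and
# sort by the magnitude of the correlation.
ranked = sorted(((name, correlation(vals, totals)) for name, vals in samples.items()),
                key=lambda item: abs(item[1]), reverse=True)
for name, rho in ranked:
    print(f"{name}: correlation with finish = {rho:+.2f}")
```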
SRA/ICSRA tools often provide additional capabilities that are quite useful, although not always
required. These additional capabilities should be considered when making a selection, as they may aid
in P/p decision making. NASA has an Agency license agreement for two tools – JACS and Polaris – which
are both available on the ONCE database.93 It is a recommended practice that all SRAs and ICSRAs be
performed on one of the NASA-provided tool platforms. The capabilities of these tools, along with other
commonly-used NASA tools, are captured in the NASA Schedule Management Tools Matrix.94
The two most important things to consider about the data collected for the analysis are:
1. Consistency. The data must be consistent. If the IMS/Analysis Schedule, risk list, and cost data
are inconsistent with each other, then it will be difficult for the Schedule Analyst to map
programmatic inputs to the model. Inconsistencies can be found when using different WBSs for
cost and schedule data, if the level of detail represented in the Analysis Schedule does not allow
for accurate representation of the risks, or if there is a lack of traceability between other cost
and schedule inputs. If the EVM data is to be used to estimate uncertainties, then it must be
consistent with the cost and the schedule inputs. That is, using past performance data to estimate uncertainty for future performance must be done carefully with respect to the integrated baseline, or PMB. Sometimes the baseline will be updated to adjust for a
history of low performance as reported through the EVM. When that occurs, the EVM
performance data needs adjusting to account for the changes in the baseline before future
uncertainty estimates are developed for the SRA Models. As simple as this sounds, it is common
for the Schedule Analyst to be provided inconsistent data sets that must be reconciled.
2. Configuration Control. Maintaining consistency between collected data sets and the iterations
of the SRA Model is the responsibility of the Schedule Analyst. The Monte Carlo simulation tool
used for the analysis is simply a mathematical machine that operates on the inputs to produce
outputs. The wisdom of the analysis comes from the thoughtful manipulation of the inputs and
observance of the outputs to determine sensitivities of the P/p schedule to the inputs (e.g.,
uncertainties and risks). Hence, careful configuration control of the inputs and the SRA Model
version is critical for traceability to the outputs.
6.3.2.2.2 Procedure 2. Collect Uncertainty and Risk Data and Ensure Suitability for Analysis
NPR 8000.4 describes risk as being concerned with uncertainty about future outcomes. Risk is the
potential for shortfalls with respect to achieving explicitly established and stated objectives. As applied
to P/ps, these objectives are translated into performance requirements, which may be related to
institutional support for mission execution or related to any one or more of the following domains:
• Safety
• Mission Success (Technical)
• Cost
• Schedule
As such, uncertainties and risks affecting the schedule can arise from several sources, including:
• Lack of a realistic schedule developed to a level of detail that accurately reflects how the work
will be done, with fully developed work scopes and sequential logic. Schedules should not
simply be devices intended to reassure everyone that the P/p will be completed on time.
• Inherent uncertainty of the work arising from advanced technology, design and manufacturing
challenges, and external factors including labor relations, changing regulatory environment, and
weather
• Complexity of P/ps, which requires coordination of many contractors, suppliers, government
entities, etc.
• Estimates prepared in early stages of a P/p with inadequate definition of the work to be
performed, and inaccuracies or optimistic bias in estimating activity durations
• Over-use of directed (constraint) dates, perhaps in response to competitive pressures to
develop aggressive, unrealistic schedules
• P/p management strategies favoring late starts (“just-in-time”) scheduling or “fast track”
implementation
• Lack of adequate float or management reserve95
For the purposes of this handbook, and specifically for the SRA/ICSRA processes, the focus of all
uncertainty and risk discussion will pertain to those affecting programmatics. In alignment with NASA’s
other PP&C guidance documents (e.g., NASA Cost Estimating Handbook, etc.), programmatic risk and
uncertainty are defined as follows:
• Uncertainty. Uncertainty is the indefiniteness about a P/p’s baseline plan. It represents the
fundamental inability to perfectly predict the outcome of a future event. Uncertainty is
characterized by a probability distribution, which is based on a combination of the prior
experience of the assessor and historical data.96
• Risk. Risk is the combination of the likelihood and the consequence(s) of a future undesired
event or scenario occurring. Uncertainties are included in evaluation of likelihood (probability of
occurrence) and consequence.97
The risk identification step should be robust enough that it fully characterizes the risk and its impacts.
At a minimum, the Schedule Analyst should be able to:
• Include all risks. It is common for P/ps to capture only the top x number of risks, or perhaps just
red and yellow risks. However, for a successful and informative SRA (or ICSRA), including all
identified risks provides a more holistic view and may provide insight on where clusters of risks may impact the schedule. In addition, Agency resources such as historical CADRe data capture historical P/p risks that may be helpful in ensuring a more complete risk list.
95 Hulett, D. T. "Project Schedule Risk Assessment." PMI. Project Management Journal, 26(1). Pages 21-31. https://www.pmi.org/learning/library/project-schedule-risk-assessment-2034
96 NPR 8000.4 defines uncertainty as, "An imperfect state of knowledge or a variability resulting from a variety of factors including, but not limited to, lack of knowledge, applicability of information, physical variation, randomness or stochastic behavior, indeterminacy, judgment, and approximation."
97 NPR 8000.4 defines risk as, "Risk is the potential for shortfalls with respect to achieving explicitly established and stated objectives. As applied to P/ps, these objectives are translated into performance requirements, which may be related to mission execution domains (safety, mission success, cost, and schedule) or institutional support for mission execution."
• Qualify all risks. All risks, including green risks, should be individually qualified (i.e., LxC
categorical ratings that produce the 5x5 risk matrix).
Not only should the risk entries include an assessment against the P/p rating criteria, but the P/p should
identify the mapping between the risks and the tasks within the IMS/Analysis Schedule, describing how
each will impact the P/p’s plan. It is important to note that the P/p is accountable for risks that impact
the schedule but may be outside P/p control (e.g., the risk of an international contribution not coming in
on the scheduled date). Depending on the robustness of the P/p’s Risk Management process, the
Schedule Analyst may also be able to:
• Quantify all risks. All risks, including green risks, should be individually quantified (likelihood of
occurrence, schedule impact, and cost impact).
• Map all risks. All risks should be mapped to activities within the schedule.
If the P/p has not already quantified or mapped the risks to the IMS, these steps will be addressed in
Section 6.3.2.3.4. Once the Schedule Analyst is assured that the risks are of a quality suitable for the
analysis, the risk data, usually in the form of an MS Excel spreadsheet, is exported from the database.
Typical data exported includes the fields as shown in Figure 6-17.
Exported fields: Risk ID, Title, Risk Statement, Context, Owner, Likelihood, Consequence, Risk Mitigation Plan, Schedule UID. Example row (Risk ID 1):
  Title: Widget Delay
  Risk Statement: Given that the widget is a complex new design, there is a possibility that unplanned development issues may cause re-design and lead to late delivery into I&T.
  Context: The framis pin for the widget is a new technology development and may have performance problems when completed, requiring redesign.
  Owner: Sam
  Likelihood: 3
  Consequence: 4
  Risk Mitigation Plan: Alternate Framis Pin design in parallel development
  Schedule UID: 2074
Figure 6-17. This image illustrates the typical risk data exported to an Excel file.
6.3.2.2.3 Procedure 3. Collect Cost Data and Ensure Suitability for Analysis
The general approach to estimating costs should be consistent with the processes defined in the Cost
Estimating Handbook98. It is important to note that the costs will need to be mapped to the schedule,
requiring consistency in the WBS used for Schedule Development and cost estimating. An important
characteristic of the ICSRA Model is that it will “stretch and/or shrink” the durations of activities in the
schedule that are exposed to risk impacts. Using this principle to guide how the ICSRA is built means that more fidelity is required for areas of the schedule exposed to risk and less in other areas. It is also important to separate TD costs (e.g., labor, facility utilization time) from TI costs (e.g., travel, supplies,
component purchases, firm-fixed-price subcontracts, etc.).
In addition, when producing an ICSRA, there generally exists some ambiguity about the differences
between cost loading, resource loading, and budget loading a schedule. These terms are defined in
Section 5.5.12. Understanding the nuances between these methods is a necessary step in collecting the
right “cost” data to properly develop the ICSRA. The Schedule Analyst will need to work closely with the
Cost Analyst to understand which “costs” are to be applied to the ICSRA Model. Although the
terminology “resource-loaded schedule” is used in the NASA policy, traditional resource loading of the
schedule is not the only option to support the ICSRA requirement. Appendix J of the Cost Estimating
Handbook helps to clarify the policy:
“The policy clearly states that the projects are required to generate a resource-loaded schedule.
This terminology can be confusing and deserves some attention. NASA’s definition of resource
loading is the process of recording resource requirements for a schedule task/activity or a group
of tasks/activities. To many people, the use of ‘resource loading’ implies that the tasks need to
be loaded with specific work or material unit resources. This is NOT the intent of the policy. In
general, the terminology of ‘resource-loaded schedule’ can be used interchangeably with ‘cost-
loaded schedule.’ The intent of the JCL policy is not to recreate the lower level management
responsibilities of understanding and managing specific resources (labor, material, and
facilities), but to instead model the macro tendencies and characteristics of the project. To do
this, cost loading a schedule is sufficient and a resource-loaded schedule is not required.”99
99 NASA Cost Estimating Handbook, V4.0. February 2015. Appendix J. Page J-6.
https://www.nasa.gov/sites/default/files/files/01_CEH_Main_Body_02_27_15.pdf
In addition to schedule-based performance metrics, captured in Section 7.3.3, the EVMS is an excellent
source of performance data. EVM data can be used to adjust the uncertainties applied in the risk parameters of the SRA Model. For example, if the Schedule Performance Index (SPI) is 0.95 and not
improving, the future activity durations should have uncertainty adjustments that increase durations by
5% or more depending on trends and effectiveness of corrective action plans. Other schedule-based
performance metrics and indicators that provide useful information for estimating uncertainty include
BEI, CEI, and HMI. These metrics are detailed in Section 7.3.3.1.3.
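A sketch of the SPI-based adjustment described above; the 1/SPI factor is one common heuristic rather than a mandated formula:

```python
def spi_duration_factor(spi: float) -> float:
    """Heuristic: scale remaining durations by 1/SPI when performance is flat.
    An SPI of 0.95 implies work is earned about 5% slower than planned, so remaining
    durations (or their uncertainty ranges) stretch by roughly 1/0.95 = 1.053."""
    return 1.0 / spi

print(f"SPI 0.95 -> duration adjustment factor {spi_duration_factor(0.95):.3f}")
```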
Figure 6-18. The figure illustrates the six procedures for construction of the SRA.
The schedule network shown in the figure is either the IMS or an Analysis Schedule as determined in the
Collect Schedule Data procedure in Section 6.3.2.2.1. Then, the uncertainties, illustrated in the upper
right of the above figure, are inserted directly into the schedule attached to the relevant activities that
have uncertain durations. The uncertainties are not usually subject to trade studies and mitigation options analysis and are therefore relatively static. Also, their parameters are generally simple. For these reasons, experience has shown that they are more easily loaded and managed if they are placed directly in the schedule. The risk data file is then exported from the P/p's risk management database, and quantified risk parameters are appended to the relevant risks in that file. The form and format of
the risk parameters must comply with the input needed by the SRA software selected per the Schedule
Management Planning sub-function. The SRA Model, all input documentation, and all results
documentation should be maintained in the same archive location such that they can be revisited at any
time for additional risk sensitivity studies or data extract for trend analyses. The following sections
discuss each procedure.
Nonetheless, there are times when it may be difficult to utilize a P/p IMS as constructed for the SRA
Model. These reasons include, but are not limited to: the IMS produces undesirable assessment check results or does not adhere to NASA best practices; the IMS is not structurally sound enough to support risk analysis; the IMS comprises multiple schedules mapped in a server environment; or the IMS size and level of detail do not support risk analysis. When utilizing the IMS is not feasible, it is a recommended practice that an Analysis Schedule be used to perform the SRA.
Schedule, it is important to ensure that the schedule has gone through the Assessment checks detailed
in Section 6.2.
Before choosing between the IMS and Analysis Schedule, it is important to be aware of not only the
benefits, but the limitations and consequences of using an Analysis Schedule for SRA. Figure 6-19
summarizes the differences between using either the IMS or Analysis Schedule as the foundation of the
SRA Model. The P/p must carefully balance fidelity, currency, relevance, etc. when making the decision
to use an Analysis Schedule in lieu of the IMS for the SRA.
Fidelity
  IMS: High fidelity; an accurate description of the work actually being performed.
  Analysis Schedule: Low fidelity; float available to absorb risk impacts is often lost, and important detail may be masked. Summarizing multiple parallel activities at a higher level may diminish merge bias.100
Currency
  IMS: Maintenance requirements will instill a lag in the information. Pending changes will lag due to the approval process, funding changes, and often contract negotiations.
  Analysis Schedule: Can accommodate the latest information and changes. Changes don't require the rigor of the CM/DM controls.
Visualization
  IMS: May be too complex to easily visualize the overall perspective of the P/p. Includes all content from start, including completed activities.
  Analysis Schedule: Can be tailored to highlight current P/p issues and ignore irrelevant activities. Can be truncated to current time to improve visualization.
Risk Modeling
  IMS: May be more difficult due to complexity, but likely higher fidelity. Validates all risk drivers with more granular traceability to discrete risk impacts to activities and specific float values.
  Analysis Schedule: Simpler and much easier to visualize. Verification of risk parameters is easier.
Relevance
  IMS: Represents the baseline and is configuration controlled, routinely maintained, with continuous quality assessments performed. It is an accurate representation of all work to be performed.
  Analysis Schedule: Onus on Schedule Analyst to verify it will fully emulate the IMS and is traceable to the IMS.
Validation of SRA
  IMS: May be difficult due to complexity but will eliminate the need to validate the model.
  Analysis Schedule: Typically excludes non-risk-impacted path items, which reduces validation time. However, without modeling all scope, may miss potentially important logic ties.
Maintenance
  IMS: The IMS is maintained on a routine basis by the P/p and should not require any additional effort to reflect the current P/p status.
  Analysis Schedule: Additional maintenance required to keep the Analysis Schedule current and relevant.
Logic
  IMS: The IMS contains all activities and their logical relationships down to the lowest level. Logic is routinely assured through the Assessment Process.
  Analysis Schedule: Important activity relationships may be lost as the schedule is rolled up to a higher level.
Cost/Resource Loading
  IMS: Can be very complex, and not easily mapped due to potential differences between the cost baseline structure and the IMS structure.
  Analysis Schedule: Developed primarily to facilitate the loading of cost/resource data as well as risks and uncertainties.
Model Validation
  IMS: Can be very difficult due to size, complexity, use of constraints other than ASAP, large number of LOE tasks, large number of milestones used for visibility, etc.
  Analysis Schedule: Much easier because activities that do not contribute value to the analysis can be deleted or rolled up into a single summary activity. Completed activities can be deleted and replaced by a single sunk-cost activity.
Critical Paths
  IMS: All critical and near-critical paths are modeled at the lowest level.
  Analysis Schedule: Summarizing a collection of tasks at a higher level may mask potential stochastic critical paths or may even introduce false critical paths that would not be critical in the complete IMS.
Computer Run-Time
  IMS: The large size of the IMS can significantly increase the runtime for some of the risk analysis tools.
  Analysis Schedule: Smaller size and specific tailoring can keep the run-times manageable.
100 GAO-16-89G. GAO Schedule Assessment Guide. Page 1. December 2015. http://www.gao.gov/assets/680/674404.pdf
Figure 6-19. Comparison of the IMS and the Analysis Schedule for use in the SRA.
The appropriate level for the Analysis Schedule is determined based on a balance between the IMS, cost
estimate, and risk list. It is important when creating the Analysis Schedule to avoid making changes that
lose traceability and transparency to the IMS. The Analysis Schedule should be demonstrated to be an
accurate emulator of the IMS in areas important to the risk analysis. The IMS and the Analysis Schedule
should be consistent, showing the same dates for major milestones and showing the same critical
paths.101 The process of translating a P/p IMS into an Analysis Schedule involves the formulation of a
summary or simplified schedule comprised of tasks that mimic and replicate as closely as possible how
the P/p is planned and managed. This formulation process enables a Schedule Analyst to easily construe
a holistic portrait of how uncertainty and risk can impact a P/p’s schedule and cost. Activities in the IMS
that are not risk-prone may be rolled-up to higher-level summary activities in the Analysis Schedule.
Completed activities may be rolled-up as well or captured at a milestone on which costs can be loaded.
Where there are potential risk impacts, the Analysis Schedule should follow the IMS to a low enough
level to capture the activity or activities impacted by the risk.
Some key characteristics when building an Analysis Schedule include102:
101 Best Practices for Analysis Schedule Development. Reed Integration. Page 3. October 31, 2015.
102 Best Practices for Analysis Schedule Development. Reed Integration. Page 5-6. October 31, 2015.
• Keep it manageable (e.g., ~100 – 3000 tasks) with working durations (excluding roll
ups/hammocks/LOE) but be comfortable with the level of the Analysis Schedule as compared to
the original IMS.
o Ensure a clear representation of major work flows.
o Pay close attention to the original complexity of the schedule, including serial and parallel
nature of tasks.
o Be careful of the level of detail and resulting merge bias. Rolling up sets of activities on
parallel paths to a summary level may hide or underestimate the potential impacts of
underlying merge bias.
o Reduce redundancy while maintaining traceability of the work that flows into milestone deliverables and ensuring that all significant milestones are captured:
▪ Ensure all the P/p milestones are captured.
▪ Do not capture the same milestone multiple times at the subsystem level. The intent is
to capture milestones and work flow once. Many IMSs will have P/p level milestones
captured in each subsystem element to maintain visibility of that milestone, but for an
Analysis Schedule it just creates redundancy. Make sure the work flow from the
subsystem element to the P/p milestone is properly captured.
• Maintain the same working calendar(s) (working time) as the IMS.
• Maintain natural float while avoiding the use of lag to represent float and/or schedule margin.
• Match the level of detail to the tasks impacted by risks. For example, if there are no risks in the
C&DH, roll-up to the summary level; if there are risks in a star tracker, breakdown the GN&C to
the component level.
• There is no need to model the IMS from the beginning of the P/p to the “time now”. The
Analysis Schedule can be modeled without including completed work, as long as the completed
portion of the schedule is clearly associated with sunk costs. Using fiscal year cutoff points tends to work well due to the requirement to capture cost and schedule data at that time for the PPBE process.
Figure 6-20. Example of an Analysis Schedule.
• Model Detail. A potential risk of creating an Analysis Schedule is the increased likelihood of
losing lower-level P/p schedule details, such as dependencies and constraints, that are crucial to modeling P/p workflow and replicating reality. The risk of losing schedule detail and workflow attributes
when creating an Analysis Schedule can result in a reduction of programmatic insight and
ultimately impact NASA Agency decision-making. This potential loss of detail can be mitigated
or reduced if proper due diligence is taken to construct a robust Analysis Schedule.
• Merge Bias. Merge bias refers to the schedule impact on an activity of having a large number of predecessor activities. Activities with a large number of predecessors have a higher
probability of being delayed due to the cumulative effect of all links having to complete on time
in order for the activity to start on time. In other words, “merge bias is the impact of having two
or more parallel paths of activities, each with its own variability or uncertainty, merge into one
milestone or other activity. Under most circumstances, the inclusion of parallel tasks in the
schedule will cause the deterministic schedule to be at a low confidence level.”103 In general,
more parallel tasks increases the mean duration (shift right) and reduces the variance of the
statistical distribution.104 In other instances, merge bias may be underestimated when rolling up
multiple parallel paths into a single summary activity, such as when using an Analysis Schedule.
Careful consideration should be exercised when rolling parallel activities into summary activities.
Understanding where merge bias exists is especially important when performing the SRA. These
affected activities should be analyzed for uncertainty/risk impacts since they may become
critical as a result of the large number of merge points (a numerical illustration follows this list).
• Logic and Constraints. Regardless of the approach, the goal of the schedule that provides the
framework for the SRA Model is to understand how a schedule will react to uncertainty and risk
impacts. Logic and constraints can have significant, adverse effects on an SRA. Any constraints
impacting the ability of the activities to move freely based on the logic between activities should
be removed. The goal is not to change schedule logic or constraints to garner desired or positive
results, but to ensure that the SRA Model will accurately capture positive and negative changes
in the schedule due to schedule logic and flow, whether using the IMS or an Analysis Schedule.
• Import Issues. If the SRA tool being utilized requires an import of the schedule file and is not an
add-in to the native scheduling software, it is possible to run into issues. The Schedule Analyst
should perform spot checks of activity logic and finish dates. If the SRA tool has a built-in import
check or health check capability, it should be run, and any inconsistencies resolved before
moving on to the next procedure.
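The numerical illustration promised in the Merge Bias item above: under the simplifying assumption of three independent parallel paths, each 80% likely to finish by the milestone date, the merged milestone is on time only about half the time. The path count, probabilities, and independence are assumptions of this sketch:

```python
import random

random.seed(3)
# Three hypothetical parallel paths, each finishing by the milestone date with
# probability 0.8 when sampled independently (correlation = 0).
N, p_on_time, n_paths = 100_000, 0.8, 3

merged_on_time = sum(all(random.random() < p_on_time for _ in range(n_paths))
                     for _ in range(N)) / N
print(f"Each path on time:        {p_on_time:.0%}")
print(f"Merged milestone on time: {merged_on_time:.1%}  (analytically 0.8**3 = 51.2%)")
# Rolling the three paths up into ONE summary activity would report 80% and
# hide the merge bias entirely.
```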
6.3.2.3.2 Procedure 2. Develop and Load the Schedule Duration Uncertainty Parameters
It is a best practice for schedule duration uncertainties to be quantified with respect to appropriate
activities for inclusion in the SRA. This section discusses various methods for selecting and applying schedule uncertainty distributions to the activity durations, including the advantages and disadvantages of these methods. Figure 6-21 illustrates the concept of schedule and cost uncertainty parameters.
103 Majerowicz, W. and S. Shinn. "Schedule Matters: Understanding the Relationship between Schedule Delays and Costs on Overruns." Page 6. 2016. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160003386.pdf
104 Kuo, Fred. "Everything You Want to Know about Correlation but Were Afraid to Ask." NASA PM Challenge. 2011. https://www.nasa.gov/sites/default/files/atoms/files/04_correlation_2016_cost_symposium_fkuo_tagged.pdf
Figure 6-21. This figure illustrates the cost and schedule uncertainties concept.
105 Elliott, D. and C. Hunt. “Cost and Schedule Uncertainty: Analysis of Growth in Support of JCL.” NASA Cost Symposium.
2014.
106 Whitley, Sally. “Schedule Uncertainty Quantification for JCL Analysis.” NASA Cost Symposium. 2014.
• Risk Management System. Although uncertainty is independent of risk, how complete and
thorough the risk management system is will affect uncertainty values. From a modeling
perspective, an SRA can be calculated with a variety of methodologies incorporating discrete risks and uncertainties. In general, P/ps lean on using their risk management system (which is not
omniscient) to capture discrete risks that are currently being watched and managed, while using
uncertainty to capture both unidentified risks and activity duration uncertainty in the plan.
One distinguishing aspect of uncertainty versus risk is the absence of a likelihood of occurrence for
uncertainty. Thus, some value of uncertainty from the uncertainty distribution will always be applied to
the SRA/ICSRA. Uncertainty can be modeled with a number of probability distributions, such as normal,
lognormal, Weibull, Rayleigh, PERT, or uniform, information on which can be found on the SCoPe
website.107 However, in actual use, uncertainty is typically modeled using a three-point estimate as
shown in Figure 6-22. The low value represents the minimum (Min) extreme of uncertainty around the
duration (or cost), the middle value represents the “most likely” (M/L) value of uncertainty, and the high
value represents the maximum (Max) extreme of uncertainty. It is important to note that the baseline
plan may not be any one of these numbers – Min, M/L, Max – but should be within the range of Min and
Max. In most Monte Carlo simulation tools, uncertainty can be modeled by actual values as shown in
the figure or may be percentage-based such as plus (+) or minus (-) some percentage (%) of the
estimated value.
Figure 6-22. The figure illustrates an example triangular distribution for uncertainty.
At a macro level, there are three methods for selecting cost and schedule uncertainty distributions:
data-driven, performance-based, and SME-based approaches.
• Are there enough data to support the analysis? Sample size matters. Small samples introduce
statistical bias in the estimate. This bias should be considered when deriving uncertainties.110
SERs. Schedule Estimating Relationships (SERs) can be used to estimate durations of schedule events
much like CERs are used to estimate a particular price or cost. An SER is a mathematical relationship
that defines schedule as a function of one or more parameters or factors, which may include technical
parameters as well as parameters for cost. SERs can be used to estimate schedule duration by
connecting an established relationship with one or more independent variables to the duration time of
an event. Existing NASA tools, such as SMART and NICM, contain schedule data and SERs that can be used effectively, if the purpose and utility of the results are understood by the Schedule Analyst.111 These tools are available on the ONCE Database.112
108 Elliott, D. and C. Hunt. "Cost and Schedule Uncertainty: Analysis of Growth in Support of JCL." NASA Cost Symposium. 2014.
109 Johnson, J., E. Plumer, M. Blandford, J. McAfee. "One NASA Cost Engineering (ONCE) Database." NASA Cost Symposium. 2014.
110 Jarvis, W. and P. Oleson. "KDP B Range Estimates: Unbiased Range Estimation." NASA Cost Symposium. 2014.
111 NASA Cost Estimating Handbook, Version 4.0. February 27, 2015. Appendix K. Pages K-1 and K-2. https://www.nasa.gov/sites/default/files/files/CEH_AppK.pdf
112 SMART and NICM can be found on ONCE, https://oncedata.hq.nasa.gov/
113 https://www.nasa.gov/offices/ocfo/cost_symposium
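The functional form of an SER can be illustrated with a short sketch. The power-law shape, coefficient values, and function name below are purely hypothetical inventions for illustration; actual SERs, such as those in SMART or NICM, are fit by regression to historical NASA data:

```python
import math

def ser_duration_months(dry_mass_kg: float, a: float = 2.1, b: float = 0.35) -> float:
    """Hypothetical power-law SER: duration = a * mass^b.
    In practice, a and b would be estimated by regression on historical P/p data,
    and the independent variable(s) could be technical or cost parameters."""
    return a * math.pow(dry_mass_kg, b)

print(f"Estimated phase duration for a 500 kg payload: {ser_duration_months(500):.1f} months")
```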
Additional precision can be achieved with a data segmentation strategy and by removing outliers and anomalies, which can skew the results.114,115
Although SME guidance is sometimes subjective, there are actions that can be taken to ensure that the
analysis is as accurate as possible. The first step in developing SME-based distributions is identifying the
experts who will provide input to the analysis. Experts should be chosen based on their familiarity with
the tasks for which they are providing input. Things to consider when utilizing SME inputs:
• Right Expertise: It is important to solicit the right expertise for cost and schedule uncertainties. For example, a person may be an expert in a technical field but may not have a good understanding of the cost and schedule uncertainties of that field, whereas a recent PM or Center cost estimator may not be as competent in the technical area but may have a better feel for cost and schedule impacts. SMEs should be able to provide time estimates based
on personal knowledge and/or experience with the same personnel, or from similar P/p work or
specialized training.
• Confirmation Bias: Tendency to search for, or interpret information, in a way that confirms
one's beliefs or hypotheses. For example, an SME on a given P/p may underestimate the
negative uncertainty because they “want” the P/p to succeed.
• Framing Bias: Using a too-narrow approach and description of the situation or issue.
• Hindsight Bias: Inclination to see past events as being predictable.
One final approach for making SME-driven uncertainty estimates as accurate as possible is through
obtaining multiple inputs from which a distribution can be developed.116 117
Once an expert or group of experts has been identified, the Schedule Analyst should take care to
document the name, position, and contact information for each SME. This approach ensures that the analysis is traceable should any questions arise later. The next step is to extract their inputs for the uncertainty distributions. This is traditionally done by eliciting three potential durations for the task in question: the minimum duration required to complete the task, the most likely duration required to complete the task, and the maximum duration required to complete the task. Each of the suggested durations (Min, M/L, Max) should have associated rationale. These durations are then incorporated into the risk parameter as a triangular distribution.
114 Kuo, F., K. Cyr, and W. Majerowicz. "Duration Uncertainty Based on Actual Performance Lessons Learned." NASA Cost Symposium. 2012.
115 Drexler, J., T. Parkey and C. Blake. "Techniques for Assessing a Project's Cost and Schedule Performance." NASA Cost and
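One simple way to combine multiple SME inputs into a single triangular parameter set is sketched below; the estimates are hypothetical, and the pooling scheme shown is one of several defensible choices:

```python
# Hypothetical three-point duration estimates (Min, M/L, Max, in days) elicited
# from three SMEs for the same task, each with documented rationale.
sme_estimates = [(20, 30, 50), (25, 35, 60), (18, 28, 45)]

# One simple pooling scheme: take the widest envelope for Min/Max and the
# average of the most-likely values. Other schemes (e.g., sampling across a
# mixture of the SME triangles) are equally defensible; whichever scheme is
# used should be documented in the BoE.
pooled_min = min(e[0] for e in sme_estimates)
pooled_ml = sum(e[1] for e in sme_estimates) / len(sme_estimates)
pooled_max = max(e[2] for e in sme_estimates)
print(f"Pooled triangular parameters: Min={pooled_min}, M/L={pooled_ml:.0f}, Max={pooled_max}")
```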
event.118 While it is important to watch out for double counting in high-risk WBS elements or subsystems that may have low uncertainties, the reverse may also be true: low-risk WBS elements or subsystems may have high uncertainties. Thus, it is important that the Schedule
Analyst compensate for this underestimation through the expansion of the parameters used in
the uncertainty probability distribution to allow for a wider range of potential outcomes.
Figure 6-23 illustrates typical loading options for activity uncertainties for the more popular tools used
within NASA. Other SRA tools or scheduling tools may not support the options discussed herein.
118 Hubbard, D. W. “How to Measure Anything.” John Wiley & Sons, Inc. 2012.
Figure 6-23. The figure shows typical options for loading activity uncertainties.
Correlation between activities may be positive or negative. For instance, if the same contractor is
responsible for completing a number of related activities that share the same resources, the tasks may
be positively correlated if the contractor is uniformly efficient or inefficient across all the activities. Task
durations may also be positively correlated if they are related to external factors, such as funding delays,
design changes, bad weather, etc. Task durations may be negatively correlated if the contractor has
limited resources and the efficient accomplishment of some activities hinders the efficient completion of others. Because difficulties (or successes) in overcoming the P/p risks or schedule underestimations are likely to be systemic to a P/p, modeling correlation is an important aspect of an SRA.121
119 Joint Cost, Schedule, Risk, and Uncertainty Handbook (CSRUH). Naval Center for Cost Analysis. 2014. Page 46.
120 Joint Cost, Schedule, Risk, and Uncertainty Handbook (CSRUH). Naval Center for Cost Analysis. 2014. Page 45.
Correlation is an input parameter in most statistical simulation tools. Correlation values range from zero
to one and indicate whether the relationship between activities is weak (0.0) or strong (1.0). When
possible, data driven approaches to assign correlation between activities (e.g., historical schedule
growth between satellite subsystems) are preferred. Other methods for applying correlation between
tasks include modeling correlation that is found between activity durations when activities are
influenced by a common risk factor. If a data driven approach is not feasible, general industry guidance
suggests that values (ρ) between 0.25 and 0.75 be assigned between activities, as shown in Figure
6-24.122 In keeping with general industry guidance, it is a recommended practice that 0.3 be the default
correlation value.
Medium correlation (same personnel working a different component, or different personnel working the same component): 0.50
Figure 6-24. Examples of the differences between weak, medium, and strong correlation values.
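A sketch of how a correlation factor changes the spread of a two-activity serial total follows. A bivariate normal is used here as a simple stand-in (SRA tools apply correlation to whatever distribution shapes are loaded), and the means and standard deviations are assumed; the rho = 0.3 default is per the guidance above:

```python
import math
import random

random.seed(4)
RHO = 0.3  # recommended default correlation value from the guidance above

def correlated_durations(mean1=100, sd1=15, mean2=80, sd2=10, rho=RHO):
    """Draw one pair of correlated activity durations from a bivariate normal
    by mixing two independent standard normal draws."""
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)  # induce correlation
    return mean1 + sd1 * z1, mean2 + sd2 * z2

totals = [a + b for a, b in (correlated_durations() for _ in range(20_000))]
mean = sum(totals) / len(totals)
sd = math.sqrt(sum((t - mean) ** 2 for t in totals) / len(totals))
print(f"Serial total: mean {mean:.1f}, sd {sd:.1f}")
# For serial tasks, raising rho widens the spread (sd) with little effect on the mean,
# consistent with the discussion that follows.
```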
Correlation alters both the mean and standard deviation of the probability distribution outputs, as
shown in Figure 6-25. Correlation also has a different effect whether the activities being modeled are
rolled up at a summary level, and whether they are serial or parallel. Rolling up subtasks implicitly
assumes 100% correlation of the subtasks, when the same uncertainty factors are used. For serial tasks,
higher correlation tends to create a wider spread distribution, often with little to no effect on the mean.
For parallel tasks, increasing correlation reduces the mean duration (shift left) but increases the variance
of the curve.123 Less correlation and more parallel tasks mean more schedule risk.
121 Druker, E. "JCL in a Nutshell: Exploring the Math of Joint Cost & Schedule Risk Analysis through Illustrative Examples." NASA Cost Symposium. 2010.
122 Druker, E. "JCL in a Nutshell: Exploring the Math of Joint Cost & Schedule Risk Analysis through Illustrative Examples." NASA Cost Symposium. 2010.
123 Kuo, Fred. "Everything You Want to Know about Correlation but Were Afraid to Ask." NASA PM Challenge. 2011. https://www.nasa.gov/sites/default/files/atoms/files/04_correlation_2016_cost_symposium_fkuo_tagged.pdf
Figure 6-25. Illustration of how correlation values impact analysis results.
It is important to note that functional correlation between elements may already be accounted for in
the schedule. Functional correlation exists when SER factors are used to estimate schedule durations in
multiple WBS elements. For example, if the results of a weight-based SER are used to generate a
thermal control subsystem and a structure subsystem, then both elements will be functionally
correlated. If activities are not functionally correlated, not applying correlation in the SRA Model is the
same as assuming a correlation value of zero. Ignoring correlation produces the most pessimistic results
and can lead to an overstatement of both schedule risk and the effects of merge bias.124 “If correlation
is ignored, the variance at the total levels in the (schedule or cost) estimate will be understated, in many
cases dramatically.”125
It is also possible to not only correlate activity durations with other activity durations, but also activity
durations with costs, activity durations with risks, costs with costs, costs with risks, and risks with risks.
These are generally input options in the statistical simulation tools. For example, the Risk Driver method
discussed in Section 6.3.2.3.4 assigns risks to multiple activities causing the activities to be correlated
since the risk will occur on all activities if turned “on” during the simulation.
124 Druker, E. "JCL in a Nutshell: Exploring the Math of Joint Cost & Schedule Risk Analysis through Illustrative Examples." NASA Cost Symposium. 2010.
125 Joint Cost, Schedule, Risk, and Uncertainty Handbook (CSRUH). Naval Center for Cost Analysis. 2014.
• Merge Bias. As noted in Section 6.3.2.2.1, merge bias refers to the schedule impact on an activity of having a large number of predecessor activities. Activities with a large number of
predecessors have a higher probability of being delayed due to the cumulative effect of all links
having to complete on time in order for the activity to start on time. “The number of merging
parallel paths and the level of overlap and degree of correlation between them (lower
correlation between uncertain durations produces greater merge bias) produce an increasing
merge bias impact to the schedule.”126
6.3.2.3.4 Procedure 4. Develop and Load the Discrete Risk Parameters Impacting Schedule
It is a best practice for discrete risks to be assessed and quantified with respect to schedule impacts for
inclusion in the SRA. Each individual discrete risk will be assigned a probability distribution (e.g.,
normal, triangular, lognormal, beta, etc.) that simulates the risk impact. The Schedule Analyst should be
able to trace the risk parameters directly to the RMS and defend the modeling approach to the risk
owner and the P/p management team. If the discrete risks are not already quantified by the P/p, it is a
recommended practice to build the risk parameters directly within the P/p’s formal RMS. Experience
has shown that traceability, defensibility and repeatability are critical in this type of analysis. Working
with the Risk Manager and risk owner to quantify the risks directly in the P/p’s official RMS facilitates
risk discussions, allows for easy changes and updates, and enables configuration control of the modeling
approach by facilitating control of the inputs and the associated outputs of the SRA Model.
126 Joint Cost, Schedule, Risk, and Uncertainty Handbook (CSRUH). Naval Center for Cost Analysis. 2014. Page B-2.
[Figure: risk mitigation burn-down. Three Mitigation Plan Activities reduce the risk from LxC = 4x5 to 4x4, to 3x3, and finally to 1x2 (risk closed) across times T1 through T4; analysis times AT1, AT2, and AT3 fall between the activity completions.]
Figure 6-26. Risk Mitigation burn-down plan and values to be used at different analysis times.
Utilizing pre-mitigated risk details. From the Collect Data process described in Section 6.3.2.2, the P/p’s
current risk list should be exported from the RMS and provided to the Schedule Analyst. The risk list
should include all risks, which are, at a minimum, qualified according to the P/p’s risk management
process (e.g., assigned categorical values based on a “Likelihood x Consequence” (LxC) risk matrix).
Developing the risk parameters is captured in Step 2.
Utilizing post-mitigated risk details. As with pre-mitigated risks, capturing the likelihood of occurrence
and the consequence impact for post-mitigated risks follows the same process. The only nuance is that
the parameter values for post-mitigated risk are based on successful mitigation efforts taken to reduce
either the likelihood or the consequence, or both, of the risks, whereas parameter values for pre-mitigated risks are based on the likelihood and consequence of risks in their current state. In order to utilize
post-mitigated risk details, the Schedule Analyst should first ensure that the risk mitigations have been
thoroughly planned and their activities and costs have been loaded into the cost estimate and IMS, if
they have been approved for inclusion and funding is available. If not already incorporated into the
schedule by the P/p, it is a recommended practice to capture each step of the approved P/p-identified
risk mitigation effort as an individual “activity” in the SRA Model and properly map them to the
appropriate, existing schedule activity. Both the TD and TI costs associated with the mitigation effort
should be captured within the mitigation activities. Mitigation activities will have uncertainty around
their estimates. Thus, a range estimate in terms of minimum, most likely, and maximum (Min, M/L,
Max) values should be established for both cost and duration of a mitigation activity. It is reasonable to
expect that a higher level of uncertainty would be applied to mitigation activities than other activities
within the same WBS element, as risk mitigation carries a high degree of uncertainty about its success.
Furthermore, risk mitigation may not completely eliminate the risk. More than likely, risk mitigation will
serve to lower the risk event’s probability or impact to a level acceptable by the PM. Residual risk
represents the likelihood and impact of a risk after mitigation activities are enacted. Assessment of
residual risks provides detail on the effectiveness of risk mitigation.
The following tips are provided to assist in determining which values to use in the risk parameters.
1. The risk management process requires the P/p to take action on EVERY risk, but that action is
not always “Mitigation”. It may be “Accept”, “Research”, “Watch”, “Retire/Close”, “Residual.” It
is not unusual for the P/p to remove “Accept”, “Watch” and “Residual” risks from the active risk
register. Regardless of whether or not a risk is on the active list, it is important to note that all
categories of risk actions may have risk exposures and must be considered for inclusion in the
risk parameters.
2. Risk mitigation should be ignored unless there is clear traceability to funding and activities in the BASELINE plan. If there is no traceability, use pre-mitigated values.
3. The mitigation activities must be rational. Too often a P/p will show a milestone as a risk
mitigation activity, for example an LCR or the completion of a test. Simply holding an LCR does
nothing to reduce a risk. Conducting an LCR may actually confirm the existence of the risk.
Often a test is used as a mitigation activity. Testing does nothing to change the likelihood nor
consequence of a risk; although, testing may reveal to the P/p that the risk has occurred.
“Watching” does not mitigate a risk. Assigning a “tiger team” does not mitigate a risk. Should
any of the risk mitigation activities seem irrational, use the pre-mitigated values.
4. As mentioned in Step 1, there is inherent risk in the mitigation plan itself and it should be
considered just as any other task in the schedule. Both cost risks and schedule risks should be
considered for every mitigation activity.
5. In the diagram in Figure 6-26, the mitigation plan has three activities that reduce the risk from a
4X5 to a 1X2. There is no guarantee that any of the mitigation activities will result in the LxC
increments shown. After each activity, the risk must be reassessed; the P/p cannot blindly
accept the resulting likelihood and consequence values assigned to the mitigation plan. This is
because the mitigation activity may not yield the expected result.
6. Consider the analysis time AT1, where Mitigation Activity 1 has been incorporated into the
baseline plan. The analyst should not use the post-mitigated, residual risk LxC values just
because a mitigation plan exists; it may fail. At AT1 the analyst needs to consider the
effectiveness of the mitigation activity. For example, if the activity is to purchase alternative
widgets with known performance to replace widgets that have some likelihood of not meeting a
performance requirement, then use the post-mitigated value. If the activity is to test the
current widget to see if it meets the performance requirement, then use the pre-mitigated
value.
7. Consider analysis time AT2. The analyst should use the current risk value of 4X4 IF and ONLY IF the P/p has actually reassessed the risk. Otherwise the analyst should use the last value
formally reassessed by the P/p, which in this case, is the initial pre-mitigated value.
8. Consider the analysis time AT3. Often a P/p will “Close” the risk and remove it from
consideration for the analyst by removing the risk from the active risk list. This too, is likely
incorrect because the risk may still have a residual value (i.e., residual risk). The analyst should
use the residual value, again, IF and ONLY IF the P/p has reassessed the risk value to confirm
that the likelihood and/or consequence have been reduced. Otherwise revert to the last value
reassessed.
Ultimately, the P/p’s SRA Model may include some risks in pre-mitigated states and some risks in post-
mitigated states, based on whether some risk mitigations have been approved and funded to be
included as part of the P/p plan. It is important to document such assumptions in the BoE.
6.3.2.3.4.2 Step 2. Develop Schedule Risk Parameters
The Schedule Analyst appends the risk parameters with information from the risk file exported from the
RMS. If the P/p has not yet quantified the likelihood of occurrence or consequence impact values of any
of the risks in the RMS, the Schedule Analyst will need to work with the risk owner (e.g., Technical Lead
responsible for the risk) to determine the likelihood of occurrence, along with the impact should the risk
occur. Depending how much information is known about the risks, the same methods used for
constructing the uncertainty parameters – data driven, performance-based, and SME-based – may be
helpful in establishing a basis for the quantified risk values. Finally, the Schedule Analyst verifies the
parameters with the PM. The quantified risk values that are determined for use in the SRA Model
should be communicated to the Risk Manager and documented as part of the BoE.
The risk parameters show exactly how the Schedule Analyst will simulate the risks in the schedule
network. A typical set of risk parameters appended to the risk file exported from the RMS is shown in
Figure 6-27. The last few columns of the appended risk data fields contain the inputs for the stochastic
model and the type of distribution model used. A risk parameter consists of two parts, the likelihood of
occurrence and the impact should the risk occur. The likelihood is captured as a probability (%), while
the quantified impacts are usually captured as three-point estimates (i.e., Min, M/L, Max).
[Figure: the appended columns comprise the Risk Mitigation Plan, Risk Model Strategy, and Risk Model Implementation, plus the risk simulator inputs: Probability of Occurrence, Distribution Type, and parameters P1 through P4.]
Figure 6-27. Example risk parameters appended to the risk file exported from the RMS.
The likelihood of occurrence is simply a random draw that reflects the likelihood quoted by the risk
owner. The probability distribution most commonly used for the likelihood of occurrence is the Bernoulli distribution. For a likelihood of X%, the Bernoulli distribution will return a value of 1.0 X% of
the time and a value of zero otherwise. Figure 6-28 shows a Bernoulli distribution for a probability of
50%, where half the time the risk occurs in the simulation (1.0) and half the time the risk does not occur
in the simulation (0.0).
Figure 6-28. The Bernoulli distribution is often used for the likelihood of a risk occurring. Shown here, 50% of the time the distribution will return zero; otherwise, it will return 1.
The impact, should the risk occur, is uncertain and is also modeled by a probability distribution function.
Up to four parameters, P1 through P4, are inputs to the probability distribution function used to
describe the impact should the risk occur. The fields needed for the risk and uncertainty parameters
were identified in the Activity Attributes table. Some probability distribution functions use only two parameters, some use three, and a few use four. There are many other probability distributions available for modeling, and most are supported by the commonly available Monte Carlo simulation tools. Two of the
most commonly-used probability distribution functions are discussed below and shown in Figure 6-29.
• Uniform Distribution. The uniform distribution models the case of “I know it is equally likely to
be anywhere between this and that.” A uniform distribution model is a two-parameter model
with a lower limit and an upper limit. Any value within that range is equally likely to result when
sampled. Uniform distributions are appropriate when minimal information is known, such as
early in a P/p’s life cycle or when a new technology is being developed.
Figure 6-29. The triangular and the uniform distribution are the most commonly used.
• Triangular Distribution. A triangular distribution is a three-parameter model, with the lower end being the most optimistic or minimum outcome of the risk impact, the middle point being the most likely outcome, and the upper end being the most pessimistic or maximum outcome of the risk impact (i.e., Min, M/L, Max). Triangular distributions are most commonly used because of the ease of eliciting data from SMEs. For example, "I know it will be at least this, but won't exceed
that, and is most likely this.” Triangular distributions are useful when more information is
known about the risk, such as when data is available about a risk or when SMEs can provide
rationale for each risk parameter.
A typical risk parameter for a risk linked to the P/p network, as shown in Figure 6-27 and Figure 6-32, would be a Bernoulli distribution with the likelihood set to 50%, multiplied by a triangular distribution whose parameters define the risk impact as Min = 4, M/L = 6, and Max = 8. The Bernoulli distribution will return the value of 1.0 50% of the time and the value of 0.0 otherwise; thus, the triangular distribution produces an effective random output for half the samples.
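This Bernoulli-times-triangular construction, using the 50% likelihood and the Min = 4, M/L = 6, Max = 8 impact from the example above, can be sketched directly:

```python
import random

random.seed(5)
def risk_impact(likelihood=0.5, lo=4, ml=6, hi=8):
    """Bernoulli draw for occurrence times a triangular draw for impact,
    using the 50% / (Min=4, M/L=6, Max=8) example from the text."""
    occurs = random.random() < likelihood           # Bernoulli(likelihood)
    return random.triangular(lo, hi, ml) if occurs else 0.0

draws = [risk_impact() for _ in range(100_000)]
print(f"Mean modeled impact: {sum(draws) / len(draws):.2f} days")
# Expected value check: 0.5 * (4 + 6 + 8) / 3 = 3.0 days.
```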
Figure 6-30. This figure illustrates the use of the Bernoulli and triangular distribution to model a risk with a likelihood of 50%
and an impact with a triangular distribution of 4 for the Min value, 6 for the M/L value and 8 for the Max value.
Risk Driver Method. An alternate technique for applying uncertainties and risks to the SRA/ICSRA
Model is the Risk Driver method. The Risk Driver method uses the statistical simulation tool’s
uncertainty/risk register to model uncertainties and risks as “risk factors”. With this approach, the risk
factors are estimated as a probability distribution of the optimistic, most likely, and pessimistic (i.e.,
Min, M/L, Max) parameters, expressed as multiplicative factors (i.e., percentages) of the remaining
activity durations instead of specified risk impact durations. The risk factors are then assigned to
specific activities in the schedule. The discrete risks are modeled using the probabilities as assigned by
the P/p or adjusted by the Schedule Analyst with appropriate rationale, whereas the uncertainties are
modeled at probabilities of 100%. If more than one risk is acting on an activity, the resulting ranges are
the multiplication of the percentages tied to the individual risks. The Risk Driver method is useful when
the discrete risks are higher-level strategic risks rather than tactical or technical risks. These risk factors
can be thought of as global risks, or uncertainties, that may apply to a large subset of the tasks. This
method can be utilized if the P/p feels that there are common risks or uncertainties that affect multiple
tasks. For example, a performance risk or uncertainty could be applied to multiple tasks’ durations and
costs due to known past performance issues. Risk factors can be derived from historical data,
performance data, or SME interviews. It may be appropriate to apply both the three-point estimate method
and the Risk Driver method within the SRA/ICSRA Model, as long as the Schedule Analyst provides
justification for the modeling technique for each risk. The rationale for using the Risk Driver method for
these types of risks is the differentiation of the identified strategic risk beyond typical uncertainty. Using
this method also automatically correlates the activities that are mapped to the same risks, so no
additional correlation factors need to be applied. 127
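The multiplicative mechanics of the Risk Driver method can be sketched the same way. In this hypothetical Python/NumPy example, one 100%-likelihood uncertainty factor and one 40%-likelihood discrete risk factor act on the same activity, and their sampled percentages multiply; all factor values and the remaining duration are illustrative assumptions, not handbook data.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
remaining_duration = 40.0             # remaining activity duration in days (assumed)

# (likelihood, (Min, M/L, Max)) multiplicative factors; values are illustrative.
risk_factors = [
    (1.00, (0.95, 1.00, 1.15)),       # background uncertainty, modeled at 100%
    (0.40, (1.00, 1.10, 1.30)),       # discrete strategic risk at its assigned 40%
]

factor = np.ones(N)
for likelihood, (lo, ml, hi) in risk_factors:
    occurs = rng.random(N) < likelihood
    # Risks acting on the same activity multiply; non-occurrence contributes 1.0.
    factor *= np.where(occurs, rng.triangular(lo, ml, hi, N), 1.0)

sampled = remaining_duration * factor
print(f"Mean sampled duration: {sampled.mean():.1f} days")
```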
Cautions When Developing Schedule Risk Parameters
Other important considerations regarding the modeling of schedule risk parameters are as follows:
• Not Including All Risks. It is common for P/ps to capture only the top x number of risks, or
perhaps just red and yellow risks. However, for a successful and informative SRA, including all
identified risks, even the green ones, provides a more holistic picture and may provide insight on
where clusters of risks may impact the schedule. Sometimes a P/p will only include risks on the
deterministic critical path. Doing this assumes that the P/p knows with complete certainty what
the critical path is and limits the insights available through probabilistic analysis. If the P/p loads
all the risks, it may find that a different path is more likely to be critical given the combined
uncertainty and risk impacts affecting the schedule network. At the very least, applying all the
risks will give better insight into the stochastic secondary, tertiary, etc. critical or driving paths.
In addition, Agency resources such as historical CADRe data capture historical P/p risks that may
be helpful in ensuring a more complete risk list.
• Double Counting. To avoid double counting, the Schedule Analyst must take special care to
segregate uncertainty caused by risks being modeled in the simulation from the underlying
uncertainty of the P/p plan. This is of particular concern early in the formulation phase when
parametric models and/or SERs are being used for schedule development and cost estimating.
Parametric models and SERs are based on the data from a large number of previous P/ps which
were exposed to typical risks that P/ps normally face. Once the input risk is ready to be loaded
into the model, it is a recommended practice that the Schedule Analyst perform an “in/out”
determination for each risk that is being considered for inclusion in the SRA. This is done by
reviewing each risk and determining whether it is truly a discrete risk, “in”, or covered by the
uncertainty in the parametric cost analysis or schedule estimate, “out”. It is important that the
“out” risks be retained and deactivated, but not deleted from the risk analysis model. This is
because the determination is somewhat arbitrary and may need to be revisited. Since the
determination is arbitrary, sensitivity analyses should be run to ensure the P/p plan is robust
enough to absorb the uncertainty in the “in/out” decision.
o Inherent Risk. SERs include some inherent schedule risk and uncertainty. The Schedule
Analyst needs to work with the Technical Leads to ensure that the discrete risks reported by
the P/p are not already included in the P/p schedule estimate. An example would be mass
growth or design life. As long as the risk-reported mass growth or design life is within the
range considered in the P/p schedule estimate, the risk should not be included in the analysis. If
there is any doubt regarding whether or not the estimate includes the possible risk impact,
it is better to include the risk. (See similar rationale in Section 6.3.2.3.1 for Uncertainty
Definitions.)
127 Hulett, D. T. “Integrated Cost-Schedule Risk Analysis using Risk Drivers and Prioritizing Risks.” NASA Cost Symposium. 2009.
• Operational/Flight/Mission Risks. It is common to categorize a specific risk as an operational,
flight, or “mission” risk. The P/p will usually exclude this category of risks from an SRA, since
they reflect impacts that are likely to happen beyond the launch date. It is important to
examine risks in this category, because under the principle of “Test As You Fly,” the P/p may
have some exposure to these risks during AI&T that have not been captured in other discrete
risk entries in the risk list.
4. This process also facilitates updates to the schedule. The entire updated schedule can be cut
and pasted into the SRA Model over the prior version of the schedule, and the risk links can be
remapped using the UIDs.
These problems may not be as significant when using an IMS or Analysis Schedule that is several
hundred lines or less. Some Monte Carlo simulation software packages support loading risks in a
separate Excel file or risk register, which will resolve most of the problems mentioned above. However,
doing so will compromise the ability to test the model as discussed in Section 6.3.2.3.6.
to be linked to the tasks impacted by the risk. Figure 6-31 illustrates this procedure. This approach
maintains the integrity of the IMS or Analysis Schedule and facilitates the test procedure.
Figure 6-31. This illustration is an excerpt from an MS Project file showing the technique for appending the risk file and linking
the risks in the schedule.
Figure 6-32. On the left side of this figure is how the at-risk activity appears in the P/p schedule network and on the right side is
how the Delay-Activity risk parameter is applied.
• Delayed Start. This approach is similar to the Activity Duration Impact above except that the
risk is attached to the start of the At-risk Activity.
• Delayed Completion. Likewise, for a delayed completion, the At-risk Activity’s duration is not
impacted, but the delivery of the product to the next activity is delayed by the risk.
• Probabilistic Branching. “Branching” occurs when there is more than one possible path. As
shown in Figure 6-33, when the Monte Carlo simulation tool randomly selects “TRUE” based on the
likelihood of occurrence, a set of branch activities is executed. The branched activities may
also have at-risk activities within the branch. A typical example is when the P/p is pursuing two
or more alternative solutions and Activity 1 is a test or other selection process whose associated
likelihood of success actuates the switch. In this case, both solutions pass through Activities 2
and 3, but one alternative solution requires additional activities, such as additional software or
additional secondary structure.
Figure 6-33. The figure illustrates how probabilistic branching models are used.
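A minimal sketch of the probabilistic switch follows, again in Python/NumPy rather than a commercial simulation tool. The likelihood, activity durations, and branch contents are illustrative assumptions; the final comment notes how the same structure becomes conditional branching when the random draw is replaced by a condition.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000

act1 = rng.triangular(8, 10, 14, N)   # Activity 1: the test/selection process (days, assumed)
act2 = rng.triangular(4, 5, 7, N)     # Activity 2 (days, assumed)
act3 = rng.triangular(9, 12, 18, N)   # branch Activity 3, executed only on the alternate path

likelihood = 0.30                     # probability the switch returns TRUE (assumed)
take_branch = rng.random(N) < likelihood

# When the switch is TRUE, the branch activities are added to the path.
duration = act1 + act2 + np.where(take_branch, act3, 0.0)

# Conditional branching replaces the random draw with a condition, e.g.:
#   take_branch = module_delivery_day > need_day   (hypothetical variables)
print(f"Mean path duration: {duration.mean():.1f} days")
```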
• Conditional Branching. This risk modeling approach is very similar to the probabilistic branch
modeling approach above; however, the path taken depends on the completion of a
predecessor path. Rather than the switch being activated by a random draw, the switch
actuator is replaced with a conditional statement that actuates the switch. IF [Condition] =
TRUE, THEN execute branch activities. An example often experienced is during Integration and
Test (I&T). The “IF” test is for the delivery of Flight Module X into the integration and test
activity. If the flight module is not delivered, the branch is to use an Engineering Development
Unit for Flight Module X, complete the testing, and then, when the flight unit is ready, swap it in
and perform regression testing.
• Parallel and Serial Risks. Parallel risks are separate, independent risks whose impacts can be
resolved separately. For example, Integration and Test has two risks: (1) Module X may fail during
vibration testing, and (2) Module Y may fail during vibration testing. Should either or both fail,
they can be redesigned/repaired and re-integrated separately and the one taking the longest
time is the driver. Serial risks are risks whose impacts are cumulative. An example is Module X
has the following two risks during acceptance testing: (1) Module X may not meet a
performance requirement, and (2) Module X may not meet a power consumption requirement.
Risk 1 must be resolved before exposure to risk 2 can occur. These risks will be cumulative.
These two types are illustrated in Figure 6-34.
6.3.2.3.6 Procedure 6. Test and Verify the Discrete Schedule Risk Inputs
It is a best practice for the discrete schedule risk inputs to the SRA Model to be reviewed to ensure that
they are captured and calculating correctly in the SRA Model prior to running the simulation, and that
they represent the intended model of the schedule risk. Specifically, the discrete risk inputs should be
tested to make sure that the schedule network responds appropriately. The Schedule Analyst can check
to see how the risks are calculating by arbitrarily changing the duration of the risk, assuming the risk is
on the critical path. The risk is initially loaded as a milestone with a duration of zero days. The Monte
Carlo simulation tool draws samples from the risk parameters that are loaded for that risk and then
changes the duration appropriately.
To test the calculation, arbitrarily change the duration of the risk and observe the dynamics of the
schedule. For example, if the At-risk Activity has 10 days of total float, then changing the risk to have a
10-day duration should not change the completion date, but it would make the activity become critical.
Every increment added to the risk beyond that point should also move the completion date by an equal
number of days.
Should this not be the case and negative float is incurred for the At-risk Activity, then there is a “hard”
constraint downstream of the activity that must be removed. This constraint can usually be found by
searching the constraint field for the following constraints: Must Finish On, Finish No Later Than, Must
Start On, or Start No Later Than. When found, change the constraint to an appropriate type, such
as “ASAP” or “Start No Earlier Than”. Figure 6-35 illustrates the test.
[Figure annotations: (1) change the risk duration by roughly 20-50 days; (2) observe the behavior of the At-risk Activity and its successors: Are the successors rational? Is there negative float? Are there hard constraints? Is the launch date rational?]
Figure 6-35. Test the risk parameters by simulating risk impact with the change of a risk duration. Observe schedule response.
Once the risk inputs have been tested and the SRA Model is in working order, the Schedule Analyst can
either integrate costs to create an ICSRA Model or begin executing the analysis. The following Section
6.3.2.4 describes the Build the ICSRA Model process. Section 6.3.2.5 describes the Analysis Execution
Process.
This handbook assumes that the SRA Model exists as an entrance criterion for building the ICSRA Model.
Therefore, this process is discussed hereinafter as an appendage to the SRA. Figure 6-36 below
illustrates the Build the ICSRA Model process. It consists of eight procedures. Procedure 1 sets up the
ICSRA Model to be able to incorporate the costs, as well as the cost uncertainties and cost risks and
execute the risk analysis. Procedure 2 defines and formats the costs to be loaded into the ICSRA Model.
Procedure 3 maps and loads the costs. Procedure 4 loads the cost uncertainty parameters from the BoE.
Procedure 5 applies any necessary correlation factors to the costs. Procedure 6 defines and loads the
discrete cost risk parameters based on input from the RMS. Procedure 7 maps the discrete cost risks to
relevant activities, and Procedure 8 tests and verifies that the model is calculating as expected. Note:
The examples discussed in this process and all its procedures are specific to MS Project.
Figure 6-36. The figure illustrates the eight procedures for construction of the ICSRA.
The schedule network shown in the figure is the fully loaded SRA Model as described in Section 6.3.2.3.
Hammock tasks may need to be appended to the schedule to enable the mapping of costs, thus
beginning the transition of the SRA Model to the ICSRA Model. The hammocks are linked into the
appropriate driver activities in the ICSRA Model. Cost estimates are acquired for each of the hammock
tasks. This is done by exporting the assigned resources from the Resource Sheet (MS Project) into an MS
Excel workbook and providing it to the Cost Analyst to fill in the cost data for the resources as listed. Costs
are then mapped to appropriate activities or hammocks in the schedule. Additional cost data and
formulas to calculate the uncertainty parameters for any relevant costs are included. The uncertainties
are not usually subject to trade studies and mitigation options analysis and are therefore relatively static.
Also, their parameters are generally simple. For these reasons, experience has shown that they are
more easily loaded and managed if they are placed directly into the schedule versus through an
interface sheet. The fields needed for the risk and uncertainty parameters were identified in the Activity
Attributes Table. Correlation is applied to costs, as appropriate. The risk data file is then exported from
the P/p’s risk management database. Therein, quantified risk parameters are appended to the relevant
risks in the database. The form and format of the risk parameters must comply with the input needed
by the ICSRA software selected per the Schedule Management Planning sub-function. Then, the cost
risks are loaded into the ICSRA Model and mapped to relevant activities. The ICSRA Model, all input
documentation, and all results documentation should be maintained in the same archive location such
that they can be revisited at any time for additional risk sensitivity studies or data extraction for trend
analyses. The following sections discuss each procedure.
Since traditional resource-loaded schedules are a rarity within NASA, the handbook focuses on the
development of cost hammocks and cost loading the schedule to produce the ICSRA Model. Thus,
modeling techniques are needed that will allocate cost input, as well as cost uncertainty and cost risk
parameters to the work activities in the schedule. Furthermore, a typical cost estimate WBS may not
map one-for-one to a schedule WBS. Even if the WBSs were the same, levels of detail may vary
between the cost estimate data and the schedule. “Hammock” tasks can be added to the ICSRA Model
when costs are to be mapped to the schedule, but the schedule is significantly more detailed than the
available cost data or the required fidelity of the cost modeling.
The hammock modeling approach aims to focus on a logical section of the schedule that contains a
series of activities that are all sequentially linked via finish-to-start relationships. An example would be
for a set of many activities that are not exposed to schedule risks and may therefore be summarized at a
very high level. As illustrated in Figure 6-37, the hammock activity is an activity that is linked to the start
date of the first activity in this sequence and to the end date of the last activity—the name comes from
this anchoring to the first and last activities, which is analogous to two trees that anchor a hammock.
The cost hammock is linked start-to-start (SS) to Activity 1 and Activity 2 is linked finish-to-finish (FF) to
the cost hammock. Consequently, when the simulation runs, the cost hammock dynamically expands
or contracts, allowing the costs to increase or decrease depending on the behavior of the tasks under it.
Figure 6-37. The general concept of a hammock task is illustrated in this figure.
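The stretching behavior of a cost hammock can be sketched as follows, assuming illustrative start and finish distributions and an assumed time-dependent burn rate; in the real model, the SS and FF links in the scheduling tool perform this calculation.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 10_000

# Sampled start of the first activity and finish of the last activity spanned by
# the hammock, in workdays from a reference date; both ranges are assumed.
first_start = rng.triangular(0, 2, 5, N)
last_finish = rng.triangular(60, 70, 95, N)

burn_rate = 4_500.0                          # time-dependent cost rate, $/day (assumed)

# The SS link to the first activity and FF link from the last activity mean the
# hammock duration, and hence its TD cost, stretches with the tasks under it.
hammock_duration = last_finish - first_start
hammock_cost = burn_rate * hammock_duration
print(f"Mean hammock cost: ${hammock_cost.mean():,.0f}")
```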
Figure 6-38. The figure illustrates two cost hammocks appended to a schedule for a design and build activity.
• Case 1: Design Hammock. The hammock is valid if A, B, and C do not finish early due to risk or
uncertainty. Early finishes for those activities will not shrink the design hammock activity and
therefore the cost savings due to early finish cannot be captured. If A finishes early, C will still
drive the end date of the hammock, and vice versa. Exception: if activities A and C finish the same
number of days early, the hammock will make the correct calculation. Should there be no
risks applied to activities A, B, or C and the uncertainties are similar, the hammock activity, with
a composite uncertainty, is a good approximation. Also, just because A, B, or C have the
potential to finish early and the savings are not accrued does not necessarily mean that the
Schedule Analyst needs to further refine the model. The error may be so small that the
additional fidelity is not worth the effort to further decompose the model.
• Case 2: Build Hammock. Early finishes, as discussed in Case 1, invalidate the hammock. In
addition, for Case 2, activity G has 4 days of total float. Should activity G run long due to risk or
uncertainty, the additional costs incurred are not captured by the hammock until the 4 days of
float are used up. This impact may or may not be negligible.
The Schedule Analyst will always be challenged with making decisions of fidelity versus model
complexity. Should either of these cases produce an undesirable error, the hammock must be
abandoned and subdivided into two or more hammocks. Should the cost estimate not be available at
that lower level, the Schedule Analyst must either apportion the costs based on an assumed complexity
ratio or simply accept the error. When such decisions are made, the Schedule Analyst must document the
decision rationale/assumptions in the BoE.
Figure 6-39. Sample ICSRA showing the risk list and the cost model appended at the bottom of the IMS.
Example: Flight Instrument Cost-loading Model. Figure 6-40 shows an expanded view of the cost
modeling from the project example shown in Figure 6-39 and includes several important considerations
(i.e., cautions) regarding the simulation approach. The analysis took place in the spring of 2012 and was
a straightforward forecast of how the next five months would go for FY12. Thus, the FY12 cost model
was simply the FY12 estimate adjusted by the cost performance index and was loaded as a TI cost that
was evenly spread over FY12. This FY12 approach was repeated for all other components except for
those exposed to a risk that could occur in those next five months. Those cases were modeled as time
dependent with a risk assigned. The Flight Instrument activity after FY12 was exposed to several risks
and a hammock was built to accommodate the cost model variability due to risks and uncertainties. The
hammock start date is the beginning of FY13 via a link from the FY12 fixed model and ends where the
instrument is completely integrated and delivered to System Integration and Test (I&T), modeled by a
link back up into the IMS where that event occurs. Thus, in the body of the IMS, as the Flight Instrument
experiences delays due to risk and uncertainty and is late for delivery into I&T, the cost hammock is
“stretched” via that link and will accumulate the additional costs for the delay.
Figure 6-40. This figure shows the cost model for the Flight Instrument activity along with other important notes.
• Point of Reference. For any programmatic analysis, the P/p must establish a Point of Reference.
In Figure 6-40, the schedule Point of Reference is the beginning of fiscal year 2012 (see the left-
most dashed vertical line on the schedule) and corresponds to the “point estimate” for cost. It is
critically important that the P/p provide a complete and consistent set of data inputs for the
analysis; technical, cost, schedule, and risk data must all be tied to the same reference point. For
example, the cost data cannot contain the cost for a contract change order whose schedule
impact is not contained within the IMS, or Analysis Schedule, if used. This consistency is
absolutely necessary to be able to confirm the cost inputs and verify their performance. It is
acceptable to use post-processing or perform sensitivity analyses to show the impacts of
proposed changes. However, the cost model must have a firm and consistent BoE that is
formally documented. This specific project chose the beginning of fiscal year 2012 (FY12) as the
Point of Reference because the financial system supported an accurate accounting of all costs
and a calculation of the sunk costs to that point. Therefore, all costs to FY12 are considered
fixed and the variable cost models are from that point on.
• Date of Analysis. For this project, the analysis was conducted in the spring of 2012. It is
important to have the Point of Reference as close as possible to the analysis date to minimize
the impacts of changes from the Point of Reference date to the analysis date.
• Sunk Costs. For the Payload subsystem, all sunk costs are modeled as a TI cost spread over
FY11. Likewise, but not shown in the figure, each subsystem of the project had a similar sunk
cost model. Sunk costs were distributed to the subsystems to facilitate the verification by being
able to check that the component cost models would sum to the subsystem level.
• Cost Loading. A risk-informed, “resource-loaded” SRA model forms the ICSRA. Care should be
taken when considering the approach necessary to assign and allocate resources that will be
integrated with uncertainties and risks in the ICSRA model. To build a reliable ICSRA Model and
satisfy Agency requirements, the schedule should at a minimum be cost loaded. As discussed in
Section 5.5.12, budget loading the schedule should be avoided as it may not account for
inherent cost uncertainties and risks and would therefore not give an accurate projection of the
P/p’s cost. However, it is possible that cost loading may ultimately double count uncertainties
and risks if the uncertainties and risks are not traceable to the cost estimate. This can be
avoided with proper documentation of both the cost estimate and the planned schedule in the
BoE. More fidelity could be achieved by resource loading the schedule, although the time
required to do so may create inefficiencies for the purposes of creating an ICSRA if resource
loading is not already part of the P/p’s day-to-day work.
6.3.2.4.2 Procedure 2. Define and Format Costs to be Loaded into the ICSRA Model
It is a best practice for a cost model that replicates the P/p cost estimate to be defined and formatted
for the purposes of performing an ICSRA. It should be noted that a “cost model” is not a “cost
estimate.” Developing a cost estimate is a formal analysis process that generates the actual expected
costs for each WBS item in the P/p plan per the cost estimating techniques defined in the NASA Cost
Estimating Handbook. The cost model, as defined herein, is a mathematical model attached to the
schedule that replicates the cost estimate. For example, a cost estimate may go to a very low-level of
detail that is not needed for the ICSRA. Therefore, the cost model in the ICSRA will replicate the cost
estimate at a much higher level. In addition, the cost model is developed in such a way that will allow
for cost growth due to schedule growth caused by schedule risks and uncertainties, as well as cost
growth due to cost risks and uncertainties. The cost uncertainties and cost risks, which are captured by
parameters similar to those for schedule duration uncertainties and schedule risks, are covered in the
following procedures.
The P/p’s Cost Analyst provides the cost estimates to the Schedule Analyst in the Collect Data process
detailed in Section 6.3.2.2 for inclusion in the ICSRA Model. This is best accomplished through the use of
a spreadsheet. Most scheduling software tools can export an MS Excel sheet with the assigned
resources. The Schedule Analyst can export the resource spreadsheet and add extra columns where the
Cost Analyst can capture cost inputs, including parameters to reflect the uncertainty and risk related to
the costs.
Figure 6-41 shows an example from a NASA project to demonstrate the procedure. Step 1 is to export
the resource sheet into an MS Excel worksheet. The upper part of the figure is the Resource Sheet from
MS Project and it is shown being mapped into MS Excel via the downward pointing blue arrow on the
left. Step 2 is to modify that worksheet to be able to capture all of the needed cost, risk, and
uncertainty data. This is the worksheet image across the lower part of the figure. Once completed, the
sheet becomes the “Cost Interface” sheet and is used as the standard interface between the Cost
Analyst and the Schedule Analyst and supports the ICSRA throughout the P/p life cycle.
Figure 6-41. Example case of mapping costs from the cost model to the MS Project Resource Sheet.
Per the figure, note that the TI costs will need to be loaded in the field labeled “Cost Per Use”. The TD
costs need to be loaded in the field labeled “Standard Rate”. Since these are cost parameters rather
than actual resources, the Cost Analyst will not have that specific rate data. The Cost Analyst will have
the total cost for each of the cost parameters. In addition, the exported sheet needs additional columns
to capture the cost uncertainties. For these reasons, the exported sheet needs modification to support
the development of the ICSRA Model. This is the topic of the next step.
[Figure callouts: add a Duration field and copy the hammock durations into it; calculate rates using the durations and the Most Likely cost field; for the specific probability distribution selected, the Cost Analyst provides a three-point estimate for TD and TI costs.]
Figure 6-43. This figure shows the additional columns added to the exported Resource Sheet. It also shows the derivation of the
loading parameters for costs and the cost uncertainty probability distributions.
Figure 6-44. Sample resource assignment sheet from MS Project is illustrated in this figure.
Although not shown in the sample project model, cost model names should include WBS identifiers such
that they can be arranged in hierarchical order. This will make it easier for the Schedule Analyst to load
the cost data.
The burn rates are calculated from the most likely cost values and the hammock durations. The
probabilistic equations or parameters are defined and linked to the cost data in the worksheet. Then
the cost rates and the probabilistic uncertainty parameters are transferred to the Standard Rate field in
the scheduling software’s resource sheet. The case shown in Figure 6-45 is for MS Project using @Risk
as the Monte Carlo simulation tool. The process is repeated for loading the TI cost values into the
Cost/Use fields.
Figure 6-45. The figure illustrates the procedure for transferring the data from the Cost interface workbook to the MS Project
Resource Sheet.
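As a rough illustration of the rate derivation for the Cost Interface sheet, the following Python sketch computes the TD burn rate destined for the “Standard Rate” field from a most likely cost and a hammock duration, and carries a TI “Cost Per Use” value alongside it. All dollar values, the duration, and the uncertainty multipliers are illustrative assumptions, not handbook data.

```python
# All values below are illustrative assumptions, not data from the handbook.
most_likely_cost = 1_250_000.0   # Cost Analyst's M/L estimate for the hammock, $
hammock_duration = 125.0         # duration copied from the schedule, workdays

standard_rate = most_likely_cost / hammock_duration  # TD burn rate -> "Standard Rate" field
cost_per_use = 200_000.0         # TI cost, loaded once -> "Cost Per Use" field

# Three-point cost uncertainty, expressed as multipliers on the M/L value (assumed):
min_f, ml_f, max_f = 0.95, 1.00, 1.20

print(f"Standard Rate (TD): ${standard_rate:,.0f}/day")
print(f"Cost Per Use (TI): ${cost_per_use:,.0f}")
print(f"Uncertainty range on cost: {min_f:.2f}x to {max_f:.2f}x of M/L")
```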
• Double Counting. Double counting is a situation that occurs in ICSRAs, whereby the potential
impact of uncertainties and/or risks is incorrectly accounted for more than once in an
SRA/ICSRA. Double counting usually occurs when: (1) uncertainties or risks are applied to costs
whose estimates were based on past P/p data that already accounted for similar
uncertainties/risks; (2) wide uncertainty distributions are applied to costs that account for
risk events, which are also applied separately as discrete risk impacts; and (3) schedule duration
uncertainty drives cost impacts that are also applied separately as cost uncertainties or risks.
The first two cases are described in Section 6.3.2.3.2, and the third case is described below.
o Overlapping Cost and Schedule Uncertainties/Risks. Since some costs are time dependent,
it is possible that cost uncertainty will be double counted if the SME’s judgement regarding
the uncertainty in cost is tied to how long the risk will take instead of the cost rates. The
Schedule Analyst must work closely with the Cost Analyst to disambiguate potential
uncertainty or risk overlaps between cost and schedule.
• Underestimating the Uncertainty. The second issue is that whenever SME judgment is used to
develop uncertainty distributions, there is a tendency for SMEs to underestimate the range of
uncertainty. It has been demonstrated that SMEs tend to only capture a portion of the true
uncertainty in their estimate of the range of potential outcomes of an event.129 Analysts and
engineers tend to overestimate best-case outcomes and underestimate worst-case outcomes.
Thus, it is important that the Cost Analyst compensate for this underestimation through the
128 NASA Cost Estimating Handbook, Version 4.0. February 27, 2015.
https://www.nasa.gov/sites/default/files/files/CEH_Appj.pdf
129 Hubbard, D. W. “How to Measure Anything.” John Wiley & Sons, Inc. 2012.
expansion of the parameters used in the uncertainty probability distribution to allow for a wider
range of potential outcomes.
6.3.2.4.6 Procedure 6. Define and Load the Discrete Risk Parameters Impacting Schedule
It is a best practice for discrete risks to be assessed and quantified with respect to cost impacts for
inclusion in the ICSRA. As with the discrete risk inputs to the SRA Model, each individual discrete risk
input for the ICSRA Model will be assigned a probability distribution (e.g., normal, triangular, lognormal,
beta, etc.) that simulates the risk impact. The Schedule Analyst should be able to trace the risk
parameters directly to the RMS and defend the modeling approach to the risk owner and the P/p
management team. If the discrete risks are not already quantified by the P/p, it is a recommended
practice to build the risk parameters directly within the P/p’s formal RMS.
130 NASA Cost Estimating Handbook, Version 4.0. Appendix G. February 2015.
Uniform distribution with parameters of 1.10 and 1.20. See Section 6.3.2.3.4, Step 2 for more
information.
6.3.2.4.8 Procedure 8. Test and Verify the Cost and Discrete Cost Risk Inputs
It is a best practice for the costs, cost uncertainties, and discrete cost risk inputs to the ICSRA Model to
be reviewed to ensure that they are captured and calculating correctly in the ICSRA Model prior to
running the simulation, and that they represent the intended model of the cost risk. As stated
previously, appending all of the model components, cost and risk, at the bottom of the IMS or Analysis
Schedule facilitates visibility and promotes easy test and verification that is clearly defensible. As shown
in Figure 6-46, the Schedule Analyst should work with the Cost Analyst to perform a series of checks to
test and verify that the cost hammocks are calculating as expected, and that performance is consistent
with what the cost estimates would predict for the same test cases.
Figure 6-46. Testing and verifying the ICSRA Model is shown in this figure.
While the P/p may implement informal analysis on a routine basis that consists of many different input
scenarios and simulation iterations, more formal analysis is typically executed in three cycles and a final
update: a trial version to verify the modeling, a preliminary version to validate the outputs, a final
version that is ready for external consumption, and an update that addresses any feedback from the final
review. Details of each version are shown in Figure 6-47.
Regardless of whether the analysis is informal or formal, it will be documented and communicated at
some level. The analysis is communicated to stakeholders consistent with guidance in Section 8.3.2.
The analysis requirements, assumptions, inputs, and outputs are documented as part of the BoE and
backed up/archived consistent with guidance in Section 8.3.3.
6.3.2.5.1.1 Step 1. Test and Verify the Simulation Calculation due to Uncertainties
Uncertainty is inherent in the planned schedule because all schedule durations are “estimates”. Thus, it
is important to understand how the uncertainty inputs affect the SRA Model prior to including discrete
risks in the simulation run. Running the simulation with “uncertainty only” provides a baseline case
from which the inherent risk can be understood and separated from the potentially manageable or
accepted discrete risks. By turning off all of the discrete risks (i.e., deselecting the risks or setting the
likelihoods to 0% prior to running the simulation), spot checks can be performed to understand the
impact of duration uncertainties on individual paths that aggregate in the uncertainty posture for the
overall schedule. The Schedule Analyst should ensure that the effects of the uncertainty distributions
loaded in the SRA Model are calculating as expected. Changes in the “likely” or probabilistic critical path
activities due to uncertainty impacts should be cross-checked for reasonableness. If using an Analysis
Schedule with differing levels of detail for different WBS elements, it may be necessary to revisit how
uncertainty was applied to discrete work package-level activities versus summarized activities.
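The “uncertainty only” baseline can be sketched as follows on a toy serial path in Python/NumPy: the same simulation is run with the discrete risks deselected (likelihood effectively 0%) and again with them on, and the two P70 values are compared. The path durations and risk parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10_000

def simulate(include_risks: bool) -> np.ndarray:
    """Toy serial path: three uncertain activities plus one optional discrete risk."""
    total = (rng.triangular(18, 20, 26, N)
             + rng.triangular(28, 30, 40, N)
             + rng.triangular(9, 10, 14, N))
    if include_risks:
        occurs = rng.random(N) < 0.5   # setting this likelihood to 0% "turns off" the risk
        total += occurs * rng.triangular(4, 6, 8, N)
    return total

baseline = simulate(include_risks=False)   # uncertainty-only baseline case
loaded = simulate(include_risks=True)
print(f"P70, uncertainty only:    {np.percentile(baseline, 70):.1f} days")
print(f"P70, with discrete risks: {np.percentile(loaded, 70):.1f} days")
```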
6.3.2.5.1.2 Step 2. Test and Verify the Simulation Calculation due to Discrete Risks
Once the uncertainty impacts are tested and understood, discrete risk calculations can be checked.
How a discrete risk impacts the overall simulation results depends not only on its assigned likelihood,
consequence distribution and distribution type, but also on whether the activity to which it is mapped
falls on a stochastic (i.e., probabilistic) critical path, or driving path. A risk’s effective impact is also
dependent on whether discrete risks form clusters and have cumulative impacts along certain paths.
While it is difficult to anticipate the effective impact of any single risk on the overall schedule (i.e., this is
the reason why SRAs are performed), a quick check can be performed to show how a risk is calculating
based on how it is modeled. By turning on one risk at a time and applying a likelihood of 100% so that it
is effectively sampled for every iteration of the simulation, the resulting impact of a risk can be
observed. This can be done iteratively for risks assigned to different paths to understand how the
individual risks are calculating based on how they are modeled. Risks that are mapped to activities on
the deterministic critical path will likely have more of an overall impact than risks mapped to non-critical
path activities. However, if the “likely” or stochastic critical path activities change due to the impacts of
uncertainty, then the discrete risks tied to these stochastic critical paths will likely have a greater impact
on the schedule finish or end item delivery date.
Figure 6-48. A generalized image of the simulation data table is shown in the figure.
The simulation data table is also used to produce scatterplots and correlation parameters. The scatterplots are
then used to determine the JCLs for specific selected outputs, usually cost and milestone completion
date. The JCL is one source of information used to assist management in establishing the MA and ABC.
Normalizing the frequency histogram produces a similar histogram with a total area of 1.0. That curve is
the Probability Density Function (PDF) and will yield the probability of occurrence of any value selected
along the x-axis. If one were to integrate the PDF, it would result in the Cumulative Distribution
Function (CDF). The CDF is often called the Sigmoid-curve (S-curve), which plots the confidence levels
associated with possible dates and/or costs. For any point selected along the CDF’s x-axis, the
corresponding y-axis value gives the probability that the x-value or less will occur; conversely, one
minus that value gives the probability that a value greater than the x-value will occur. Figure 6-49
illustrates these functions
for a game of Craps. Figure 6-50 is an actual output from a NASA project showing the complete statistics
for a selected output, the frequency histogram and the CDF.
Figure 6-49. The figure illustrates the relationship between the frequency histogram, the PDF and the CDF for Craps.
Launch
Figure 6-50. An example of the frequency histogram and the CDF for a recent NASA project is shown in the figure.
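The histogram-to-PDF-to-CDF relationship illustrated by the Craps example can be reproduced with a few lines of Python/NumPy, which also computes the Coefficient of Variation discussed below; this is a minimal sketch, not an output of any SRA tool.

```python
import numpy as np

rng = np.random.default_rng(1)
rolls = rng.integers(1, 7, size=(100_000, 2)).sum(axis=1)   # two dice, as in Craps

values, counts = np.unique(rolls, return_counts=True)
pdf = counts / counts.sum()     # normalized histogram: total probability mass = 1.0
cdf = np.cumsum(pdf)            # accumulating (integrating) the PDF yields the CDF

cv = rolls.std() / rolls.mean() # Coefficient of Variation: standard deviation / mean
for v, p, c in zip(values, pdf, cdf):
    print(f"roll {v:2d}: P = {p:.3f}   P(x <= {v}) = {c:.3f}")
print(f"CV = {cv:.1%}")
```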
Figure 6-51 illustrates some important characteristics of the CDF and PDF that are useful in
interpretation.
Figure 6-51. Examples of CDF and PDF curves that illustrate characteristics useful for interpretation.
• Slope of the S-curve (top left). The Coefficient of Variation (CV) is a normalized measure of the
dispersion of a probability distribution. Mathematically, CV is the ratio of the standard deviation
to the mean and describes the amount of uncertainty in the estimate, expressed as a
percentage. A high CV is associated with a flat slope of the S-curve, while a low CV is associated
with a steep slope of the S-curve. High CVs are typical early in the P/p life cycle
because of the higher uncertainty in the cost and schedule data and usually a greater number of
risks, themselves having significant variation in their impact estimates. Later in time, usually
after PDR, the P/p may observe a steepening of the S-curve as the P/p becomes more certain
about the cost and schedule estimates and risks begin to be retired and have lower uncertainty.
• “Backstopped” Schedules (top right). The “dog-leg” on the left in the S-curve indicates that
there is a date “X” before which the P/p cannot finish. This can be a common
occurrence. For example, test facilities are often a shared resource and schedules tend to be
rigid. Even if a P/p is ready to proceed to testing early, it may still have to wait until its pre-
planned time slot in the test facility becomes available.
• Long Tails (bottom left). P/ps often will have one or more high impact low probability risks.
The CDF shows a “tail” that continues to the right with increasing dates. The PDF also shows the
behavior of the tails in that there are a number of low frequency samples that seem to continue
as time (date) increases. These cases usually occur with P/ps having a difficult technology
development component on or near the critical path. More common are high impact risks that
occur during I&T. Most I&T risks are “fix-in-place”, but there are always a few low probability
risks that require de-integrating the assembly and shipping a component back to the manufacturer.
• Bimodal (bottom right). When the PDF has “humps” as shown in the bottom right figure, it is
indicating that there are two groups of risks or uncertainties that have similar impacts, one
group with low impacts and another with high impacts. This is also seen in the S-curve. It is not
smooth, exhibiting “bumps” that integrate over the two humps in the PDF. This is often seen in
P/ps with low risk development but complex I&T with high impact risks.
Sensitivity. Sensitivity answers the question “how big;” it’s the size of the impact. Sensitivity can be
measured on tasks, risk events or uncertainties. Duration sensitivity is a measure of the correlation
between the duration of a task and the overall duration of the P/p, or a selected task or milestone. The
task with the highest duration sensitivity is the task that is most likely to increase the P/p duration. The
sensitivity of a risk event is a measure of the correlation between the occurrence of any of its impacts
and the overall duration of the P/p. “Tornado” charts are typical outputs of Monte Carlo simulation
tools. A sample from a NASA project is shown below. The chart below shows that risk activity 2733 has
a 60% correlation with the completion date. It is important to note that if activity-to-activity
correlations have been applied, the activities’ sensitivity calculations are likely to be similar.
Figure 6-52. Schedule sensitivity tornado chart shows the correlation of risks with project completion date.
Similarly, Cost Sensitivity is a measure of the correlation between the cost of a task and the cost of the
P/p. The task with the highest cost sensitivity is the task that is most likely to increase the P/p cost. Cost
Sensitivity of a risk event is a measure of the correlation between the occurrence of any of its impacts
and the overall cost of the P/p. Although not a standard output in all SRA/ICSRA tools, Duration-Cost
Sensitivity is a measure of the correlation between the duration of a task and the overall P/p cost.
Criticality. Criticality answers the question “how often;” it’s the frequency of the impact. It identifies
tasks that if delayed will delay the P/p end date. Every time an activity is on the critical path, a flag is
set. Upon completion of the simulation, the percentage of times that the activity was on the critical path
is calculated. That percentage value is the criticality index and is used to find stochastic critical paths in
the schedule network. Figure 6-53 shows a sample output from a NASA project, with each activity’s
criticality represented by the percentage of iterations in which it was on the critical path. From this
data, images of groups of tasks and their criticality index can be developed to produce
a stochastic critical path diagram. An example of a stochastic critical path can be found in Figure 6-56.
P/p management can use the stochastic critical path diagram to focus management attention and avoid
surprises. Criticality is also used in the calculation of cruciality.
Figure 6-53. Schedule criticality chart shows the criticality index associated with each activity.
Cruciality. Cruciality answers the question “how big and how often;” it’s a combined look at the size
and frequency of the impact. It is a measure of how crucial the task duration is to the P/p duration. The
calculation multiplies the duration sensitivity by the criticality index. A task with a high cruciality is
highly likely to affect the P/p plan duration and therefore the finish date or end item delivery date. A
cruciality output will show not only if the activity is a driver to a reporting milestone or activity, but
whether it is on the overall network critical path. A driver for a key milestone that is not on the critical
path may show up as having 0% cruciality because the criticality is 0%. This means that while it is a
driver to the key milestone, it is not driving the overall P/p critical path.
Schedule Sensitivity Index (SSI). SSI identifies activities most likely to affect the P/p finish. SSI of an
activity is calculated as follows:
SSI = Criticality Index of Activity × (Standard Deviation of Activity Duration ÷ Standard Deviation of Project Duration)
Combining criticality with the activity duration standard deviation gives the highest values to activities
which are on the critical path and have a large range of uncertainty. A sample output is shown in Figure 6-54.
Figure 6-54. SSI is a measure used to find the tasks most likely to impact P/p completion.
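The four indicators defined above (duration sensitivity, criticality, cruciality, and SSI) can all be computed directly from simulation samples. The following Python/NumPy sketch does so for a toy two-activity parallel network; the distributions are illustrative assumptions, and real tools compute criticality from critical-path flags in the full network rather than from a simple maximum.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 10_000

# Toy network: activities A and B run in parallel and both feed the finish.
a = rng.triangular(18, 20, 30, N)
b = rng.triangular(19, 22, 26, N)
finish = np.maximum(a, b)

for name, dur in (("A", a), ("B", b)):
    sensitivity = np.corrcoef(dur, finish)[0, 1]   # duration sensitivity ("how big")
    criticality = np.mean(dur == finish)           # criticality index ("how often")
    cruciality = sensitivity * criticality         # size x frequency
    ssi = criticality * dur.std() / finish.std()   # Schedule Sensitivity Index
    print(f"{name}: sensitivity={sensitivity:.2f}  criticality={criticality:.2f}  "
          f"cruciality={cruciality:.2f}  SSI={ssi:.2f}")
```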
SSI is a measure of activity duration uncertainty and may not specifically capture the impact of a risk
with low uncertainty but high impact. Consider for example a 20-day activity with a risk with a triangular
distribution of 40, 45, 50 days. This risk has a low standard deviation but a huge impact on the activity
duration. The activity may have a large SSI but the risk impacting it will have a low SSI.
Caveat. Caution is urged when using these sensitivity indicators. First, they only measure the
correlation of activities and risks against the completion dates or total cost. The correlation values
range from 1.0 (perfectly positively correlated) to -1.0 (perfectly negatively correlated), with 0
indicating no correlation. They cannot provide an actual
measure of the cost and schedule impact. Secondly, picking the top-n from the list can be misleading
since the P/p logical network can cause the correlations to re-sort the activities and risks when the risks
are removed one (or more) at a time.
• Schedule options based on alternative workflows or alternative technical options (e.g., Analysis
of Alternatives)
• The “most likely” critical/driving path(s) given associated risks and uncertainties (e.g., Stochastic
Critical Path Analysis)
• Uncertainty and risk drivers, as well as risk prioritization for mitigation activities (e.g., Risk
Sensitivity/Prioritization Analysis)
• The probability of meeting the planned schedule or finishing the P/p on time given the
associated uncertainties and discrete risks (i.e., Confidence Level/Completion Range Analysis)
• The probability of meeting both the planned cost and the planned schedule given the associated
uncertainties and risks (e.g., JCL Analysis)
• The potential impact of uncertainties and discrete risks on schedule margin (e.g., Margin
Allocation and Sufficiency Analysis)
• The potential impact of uncertainties and discrete risks on funding and UFE (e.g.,
Funding/Reserves Analysis)
Figure 6-55. This plot illustrates typical results from an SRA developed to support risk-based options analysis for completion
date.
Examples of how the options analysis can be used for risk-informed decision-making follow.
• Option A. The black triangle on the Option A S-curve is the deterministic launch date that
results from the IMS output from the scheduling software. The SRA shows that the probability
of achieving that date is 80%; furthermore, it shows that the probability of achieving the
required date is near certainty. At first glance, this seems to be the optimum, lowest-risk option.
But suppose this option is not technically robust and it is desired to push some technology
advancements to achieve better performance, such as Options B and C.
• Option B. Option B has a low probability (30%) of achieving the planned date and has about a
25% chance of exceeding the required launch date. Option B does have a fair probability of
coming in earlier than either of the other two options. The Option B S-curve is shallower than the
others, indicating lower certainty in the outcome (a higher coefficient of variation).
• Option C. Option C has a very low probability of achieving the planned date but almost the same
probability of achieving the required date as B. Option C has a steeper curve, meaning that there
is more certainty in the outcome; option B is less certain and has a tail that could result in much
later launch dates.
Decision Making. The SRA is very useful in helping stakeholders to weigh and understand the schedule
options based on alternative workflows or alternative technical options. Information from the SRA
provides a richer decision space. In the example illustrated above, note that all three deterministic
dates meet the completion date requirement. However, without the SRA, the P/p is likely to select
Option B, which is not the best choice if meeting the completion date is of highest priority. If advancing
technology to achieve a more robust solution is a higher priority than completion date, the P/p would
not have what it needs to make a decision without the SRA; both B and C seem acceptable. With the
SRA data, Option C, even though its deterministic date is later, is the best choice, because the result is
more certain and meeting the date requirement has an acceptable probability.
Figure 6-56 shows the criticality results of an SRA performed on a NASA project. The subsystems have
been generalized because the data is SBU. The figure shows a probabilistic critical path that has a 32%
probability of becoming critical (see “Sys Data Validation” box) and should warrant proportional
management attention and possibly some risk mitigation investments.
Figure 6-56. Using the SRA to find the stochastic critical paths can help the P/p avoid surprises. The percentages listed in the
figure are the percent of the samples that were on the critical path (i.e., “criticality”).
Decision Making. The SRA is very useful in helping stakeholders to understand the “most likely”
critical/driving path(s) given associated P/p uncertainties and risks. No matter the criticality of the
probabilistic critical path(s), each path should warrant proportional management attention and possibly
some risk mitigation investments. Paths with higher criticality may warrant more attention than paths
with lower criticality. In the example described above, the P/p used this data to replan the upper path
(the 32% path) to reduce the likelihood to zero.
There are several different methods for determining the risk drivers. While risk tornado charts based on
correlation coefficients are widely used and supported by probabilistic SRA tools, recent research has
shown that “ranking risks by correlation coefficients is not a good sensitivity measure, especially for
schedule.”131 However, most other alternatives for determining risk prioritization are more
computationally difficult to implement.132 Thus, the approach typically implemented involves removing
the top risk in the tornado chart and rerunning the simulation to provide an idea about how much
impact the “top risk” has as it relates to the remaining risks. With this approach, risks are backed out of
the analysis (i.e., likelihood of the risk is set to 0%) with the simulation rerun each time and a new
tornado chart generated for each iteration to reveal the subsequent “top risk.” It is important to note
that the order of the “top risks” is dependent on the combination of risks applied to the simulation at
any given time. For example, some risks have greater overall impact on a schedule when combined with
a particular set of other risks. Removing one risk may change the critical path, making a different set of
risks more critical than if the first “top risk” had not been removed. Therefore, it is necessary to rerun
the simulation after removing each risk.133 Figure 6-57 shows a risk sensitivity analysis performed for a
NASA project.
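The remove-and-rerun prioritization loop can be sketched as follows in Python/NumPy with a hypothetical three-risk register. Each pass reruns the simulation without one risk at a time, identifies the risk whose removal most improves the 70th-percentile finish, removes it, and repeats; in practice, common random numbers should be used so run-to-run sampling noise does not distort the ranking. All risk names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)
N = 10_000

# Hypothetical risk register: name -> (likelihood, (Min, M/L, Max) impact in days).
risks = {"Risk 2": (0.5, (20, 30, 45)),
         "Risk 7": (0.4, (15, 25, 40)),
         "Risk 9": (0.3, (2, 4, 6))}

def p70(active: dict) -> float:
    """P70 finish for an uncertain baseline path plus the active discrete risks."""
    total = rng.triangular(180, 200, 240, N)
    for likelihood, (lo, ml, hi) in active.values():
        occurs = rng.random(N) < likelihood   # set likelihood to 0% to back a risk out
        total += occurs * rng.triangular(lo, ml, hi, N)
    return float(np.percentile(total, 70))

remaining = dict(risks)
while remaining:
    full = p70(remaining)
    # Rerun without each risk in turn; the ranking depends on the mix still applied.
    savings = {name: full - p70({k: v for k, v in remaining.items() if k != name})
               for name in remaining}
    top = max(savings, key=savings.get)
    print(f"Top risk this pass: {top} (~{savings[top]:.0f} days at P70)")
    del remaining[top]
```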
131 Kuo, Fred. “A Mathematical Approach for Cost and Schedule Risk Attribution.” NASA Cost Symposium. 2014.
132 Fussell, Louis. “Margin Allocation Using the Ruhm-Mango-Kreps Algorithm.” NASA Cost Symposium. 2016.
133 “Concurrently Verifying and Validating the Critical Path and Margin Allocation Using Probabilistic Analysis.” Joint Space Cost
[Figure annotations: S-curves with the planned launch date of 11/1/2012 marked; an arrow labeled “Mitigate Risk #7 & #2”; the x-axis spans completion dates from 11/2/12 through 11/1/13.]
Figure 6-57. Typical example of using the SRA to make risk mitigation decisions.
The curve to the far right shows the S-Curve for all risks included. The Schedule Analyst performed a
sensitivity study to find the driver risks. After walking through the risks one-at-a-time, it was
determined that the greatest impact was caused by only two risks. Figure 6-58 shows the risk driver list.
By mitigating those two risks, the project can save 6 months and move the S-Curve back to where there
is a 70% probability of completing on 1/8/2013. Not shown here, but perfectly eliminating all risks does
not significantly improve the completion date. Therefore, it was recommended to mitigate risk 2 and 7
and accept the impact of all remaining risks.
At a later date for this same project, an ICSRA was also run to provide the cost savings
assuming that the risks could be perfectly mitigated. Figure 6-58 shows the results of the ICSRA and
indicates the value of mitigating risks 2, 10, and 7 in both Delta Days and Delta Dollars. Analyses
such as this one provide the project with the information needed to make wise mitigation investments.
Figure 6-58. The risk driver list after performing the risk sensitivity analysis by removing risks one-at-a-time. The ICSRA provides
the cost impacts of perfectly mitigating the risks. Risks 2, 10, and 7 cause the majority of the risk impact.
Since some risks are not able to be fully mitigated, risk sensitivity and risk prioritization analysis can be
performed using both pre- and post-mitigated risk quantifications to understand the effect of risk
mitigation plans on schedule confidence.
Decision Making. Using an SRA to perform risk sensitivity and risk prioritization analysis is an integrated
schedule management and risk management strategy that can help the PM to proactively manage P/p
discrete risks and increase schedule (and cost) confidence through understanding the schedule’s
sensitivity to both uncertainties and risks. In the example described above, this analysis was used to
help prioritize risk mitigations and to provide guidance for the project’s schedule replanning exercise.
As described in Chapter 5, the IMS is essentially the P/p’s “point estimate” or time-phased plan for
performing the P/p’s approved total scope of work and achieving the P/p’s goals and objectives within a
determined timeframe. Because point estimates are exactly that, “estimates”, the actual P/p schedule
will typically fall within some range of dates with a range of confidence levels that includes the point
estimate. At various times in the P/p life cycle, schedule completion ranges (and cost range estimates),
are required by NPR 7120.5 for the overall targeted P/p schedule (and cost).134 While the NPR does not
specify a specific method for determining the completion ranges, it is a best practice for an SRA/ICSRA
to be routinely performed to estimate P/p schedule completion ranges associated with achieving
planned milestones (at least as often as required). The range provides stakeholders with high and
low bounds for the likely P/p finish (or end item completion) date, complete with confidence levels
identified for the high and low values of the range. Although there are no specific requirements as to
which high and low confidence levels should bound the range, it is typical for the PM to request the
dates associated with the 30% and 70% confidence level values to establish the range. If the PM wants
to be more conservative, dates associated with 20% and 80% confidence level values may be requested.
Note: The Agency's guidance related to establishing the schedule baseline centers on the JCL.
Considering the 70% JCL “frontier curve,” described in Section 6.3.2.5.3.5, if one picks the point from the
"knee in the curve" of where the percentage for cost is equal to the percentage for schedule, the
corresponding percentage along the S-curve is typically between 78% and 84%, depending on the
134NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2020.
correlation utilized in the ICSRA Model. As a result, selecting the 80th percentile from an SRA S-curve
allows for traceability back to Agency guidance.
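Extracting a completion range from SRA output reduces to reading percentiles off the simulated finish dates, as in the following minimal Python/NumPy sketch; the distribution of completion dates is an illustrative stand-in for real simulation output.

```python
import numpy as np

rng = np.random.default_rng(17)
# Illustrative stand-in for simulated finish dates (workdays past a reference date).
completion = rng.triangular(820, 870, 990, size=10_000)

low, high = np.percentile(completion, [30, 70])   # the typical PM request
print(f"Completion range: day {low:.0f} (30% CL) to day {high:.0f} (70% CL)")
# A more conservative range would use np.percentile(completion, [20, 80]).
```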
Figure 6-59 illustrates an example of determining completion ranges using an SRA for a recent NASA
Program.
Figure 6-59. An SRA was used to establish the schedule completion ranges for this project.
Decision Making. Performing an SRA early in the P/p life cycle not only provides the data needed to
satisfy the requirement for range estimates, it helps the P/p management understand the probability of
meeting the planned schedule given the associated uncertainties and risks. Subsequent SRAs help the
P/p track how uncertainties, risks, and P/p performance are impacting the schedule confidence level,
which can inform Schedule Control decisions throughout the life cycle.
impacts and ensure that the MA and ABC can be achieved with acceptable risk. 135 The JCL is determined
from the scatterplot by finding those cost-schedule pairs that have the specified confidence level. While
70% is the requirement for single-project Programs and projects exceeding an LCC of $250M, it is typical
for the P/p management to request the 30%, 50% and 70% JCL points for the purposes of planning as
well as establishing and tracking performance to the MA and the ABC.
• What is a scatterplot? The scatterplot is created from the simulation data table. The simulation
data table is an output from the Monte Carlo simulation tool that tabulates all inputs and
outputs for each iteration during the simulation. For every iteration, there exists a cost and
schedule pair. Those pairs are plotted with date on the x-axis and cost on the y-axis. For
example, in the plot below, the Monte Carlo simulation ran 5000 iterations and each point on
the chart is a cost-schedule pair. There are 5000 points on the chart and each point is equally
likely, meaning each point has a one-in-5000 chance of occurring.
[Scatterplot: cost ($M, y-axis, roughly $280-$320) versus date (On Dock ATK, x-axis, Dec 2018 through Oct 2019), divided into Quadrants 1-4, with the Frontier Curve annotated: every point on the frontier curve has a JCL value of 70%.]
Figure 6-60. This figure shows an example scatterplot from a Monte Carlo simulation with 5000 iterations. Each point
represents a single output from the simulation.
135 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2020.
• Frontier Curve and the JCL Value. There is no single point that has the specified JCL value;
rather, many points share that JCL value, and together they produce a curve on the scatterplot
that is referred to as the Frontier Curve. Data points up and to the right of the Frontier Curve
have JCL values greater than that of the Frontier Curve, and those down and to the left have JCL
values less than that of the Frontier Curve. Since a single value is desired for the JCL, it is a
recommended practice for the single point on the scatterplot where the cost and schedule have
the same probability of occurring, usually referred to as the "knee in the curve", to be selected
as the targeted value for the JCL.
• Asymptotes. There are two asymptotes on the scatterplot. In the figure above, the Frontier
Curve is plotted for the 70th-percentile JCL. As such, the Frontier Curve must asymptotically
approach the individual cost and schedule 70th percentiles. This is because, as one traces the
Frontier Curve up and to the left, the cost value becomes more likely and the schedule less
likely, but the schedule cannot go below the 70th percentile, hence the curve asymptotically
approaches the vertical line. Likewise, as one traces the Frontier Curve down and to the right,
the cost value becomes less likely until it reaches its asymptote. Plotting the asymptotes helps
in locating the Frontier Curve.
• Voids. A question often asked is, “What causes the vertical voids in the scatterplot?” Those are
weekends and holidays. Since the calendar does not permit work on those days, the P/p cannot
finish on those days, hence no possible solutions.
• Quadrants. In Quadrant 1, every solution is less than the individual 70th-percentile cost and
schedule confidence levels; the percentage of solutions in this quadrant yields the likelihood of
the P/p being under cost and ahead of schedule (~66% in this example). The percentage of
solutions in Quadrant 2 yields the probability of being over schedule and under cost (~3% in this
example). The percentage of solutions in Quadrant 3 yields the probability that the P/p will be
over schedule and over cost (~28% in this example). The percentage of solutions in Quadrant 4
yields the probability of being under schedule but over cost (~3% in this example). (A short
sketch of this computation follows this list.)
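The sketch below shows the quadrant computation, again with hypothetical generated pairs standing in for the simulation data table; each quadrant share is just the fraction of iterations on the corresponding side of the two 70th-percentile asymptotes.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical (finish, cost) pairs from a 5000-iteration simulation.
finish = rng.normal(150.0, 30.0, 5000)
cost = 300.0 + 0.10 * finish + rng.normal(0.0, 3.0, 5000)

f70 = np.percentile(finish, 70)   # schedule 70th-percentile asymptote
c70 = np.percentile(cost, 70)     # cost 70th-percentile asymptote

early, under = finish <= f70, cost <= c70
quadrants = {
    "Q1 ahead of schedule / under cost": np.mean(early & under),
    "Q2 over schedule / under cost":     np.mean(~early & under),
    "Q3 over schedule / over cost":      np.mean(~early & ~under),
    "Q4 ahead of schedule / over cost":  np.mean(early & ~under),
}
for name, share in quadrants.items():
    print(f"{name}: {share:.0%}")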
A NASA project JCL example follows, and the scatterplots include notes on how management used the
information for decision-making. Figure 6-61 illustrates the determination of the JCL per the NASA
requirement for a P/p at the PDR milestone. Figure 6-62 represents a JCL (i.e., ICSRA) performed for a
Mars mission at CDR to understand the joint cost and schedule confidence levels associated with the
open and close dates of the planetary launch period.
[Scatterplot: cost ($M, y-axis) versus date (On Dock ATK, x-axis), annotated with the 30% JCL (4/24/2019, $295M), 50% JCL (5/25/2019, $299M), and 70% JCL (7/16/2019, $304M) points and the cost 70% asymptote ($303M). The embedded table reads:]

PDR             JCL results               Delta from Plan
JCL      Cost, $M    On Dock       Cost, $M    On Dock, Mo
30%      $295        4/24/2019     $18         4.4
50%      $299        5/25/2019     $22         5.4
70%      $304        7/16/2019     $27         7.1
Plan     $277        12/12/2018    $0
Figure 6-61. Scatterplot and JCL for a project at PDR. Note from the shape of the scatter plot that cost and schedule are highly
correlated. Often, the asymptotes for the 70% JCL value will not fall on the individual cost and schedule 70% asymptotes.
Figure 6-61 shows the scatterplot for a NASA project as used to establish the JCL at PDR. In this
example, the cost and schedule are highly correlated, evident by the small scatter about the regression
line. In fact, that correlation was approximately $3.3M/month. This was because there was a
significant amount of level-of-effort activity. The values selected for the JCL were at the point where
both cost and schedule had the same confidence level, in this case 73%. Note: The frontier curve will
always be asymptotic to the selected percentile values for cost and schedule individually.
Figure 6-62. Scatterplot for a Mars robotic mission.
Figure 6-62 shows the scatterplot for a Mars robotic mission. The analysis was performed at the CDR.
Mars missions have a narrow launch period, and careful attention must be paid to the risk environment
throughout the project DDT&E period. For this case, a JCL curve was not needed. Instead, the project
needed to estimate the likelihood that it would launch within the launch period and be able to do so
with the available budget. On the plot, there are three rectangular boxes. The box to the left captures
all of the solutions that were ready before the launch period opened, the middle box captures the
additional solutions that were ready within the launch period, and the right-most box contains the
cases where the project would miss the launch period and need to wait 27 months for the launch
period to reopen.
The scatterplot illustrates the following:
1. Unlike the previous figure, there is very little cost-schedule correlation.
2. The number of solutions in the left-most box represented 82% of the total, meaning there was
an 82% probability of being ready on or before the opening of the launch period.
3. Adding in the solutions in the narrow middle box gave a probability of 94% of being able to
launch before the period closed.
4. Walking downward from the $390M budget and subtracting the known liens and
encumbrances left a comfortable $14M margin down to the 70th percentile cost confidence
level.
5. The project determined that the $14M margin and the 94% likelihood of making the launch
period constituted an acceptable risk posture, and it continued its current plan without further
risk mitigation.
Decision Making. Performing an ICSRA helps the P/p understand the probability of meeting both the
planned cost and the planned schedule given the associated uncertainties and risks. This information
can aid the PM in making trades between cost and schedule, as necessary. In the example illustrated
above, the PM provided the information to the stakeholders to help validate the MA and ABC. In
addition, the regression line with a slope of $3.3M/month was used to make "what-if" estimates for
project delays. The PM used the scatterplot to validate the sufficiency of funding reserves. With an 82%
likelihood of completing by the opening of the period, the PM chose to focus on launching early to take
advantage of greater flight performance margin if needed.
There are two cases to consider when using the SRA to establish the margin: Case A is the
determination of the sufficiency of margin for a required date, and Case B is the renegotiation of a
completion date to achieve adequate margin. In Case A, if margin is inadequate, replanning or risk
mitigation are the only options available. In Case B, the completion date is renegotiated to provide
sufficient margin. Figure 6-63 illustrates these techniques, and a short sketch following the two cases
below shows the underlying S-curve computation.
136 PASEG, Version 3.0. National Defense Industrial Association (NDIA), Integrated Program Management Division (IPMD). March 9, 2016. Page 73.
Figure 6-63. Examples of how the SRA is used to confirm adequacy of margin.
• Case A. The Plan S-Curve shows that even if all margin is utilized to accommodate risks, there is
only a 40% likelihood of achieving the required completion date. A risk-tolerant P/p may choose
to accept this position, but most would not. Replan or mitigate Option A yields a 30% likelihood
of on-time completion without having to allocate any margin, and its margin covers risks all the
way to a 90% likelihood of on-time completion. A risk-averse P/p may want to opt for replan or
mitigate Option B, which shows margin that covers the range from 80% to near certainty.
• Case B. In this case, the project and the stakeholders agree to renegotiate the completion date
until there is adequate margin to cover risks and uncertainties. Again, the propensity for risk
acceptance will affect the choice just as it would in Case A. For example, the Plan S-Curve shows
that margin will cover risk impacts between the 50th and 90th percentiles; however, a risk-averse
P/p might want to go with Option B.
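A sketch of the underlying S-curve computation follows; the finish-date distribution, required date, and margin values are hypothetical, and a real analysis would take the simulated finishes from the SRA tool.

import numpy as np

rng = np.random.default_rng(3)
# Hypothetical SRA output: simulated finish dates, in working days past a reference date.
finish = rng.gamma(9.0, 12.0, 5000)

required = 120.0   # required completion date, days past the reference (hypothetical)
margin = 15.0      # schedule margin held in front of the required date (hypothetical)

def likelihood(day):
    # Height of the S-curve: probability of finishing on or before `day`.
    return float(np.mean(finish <= day))

# Case A question: is the held margin adequate for the required date?
print(f"On-time likelihood if all margin is consumed: {likelihood(required):.0%}")
print(f"On-time likelihood with margin still intact:  {likelihood(required - margin):.0%}")

For Case B, the same likelihood() function is evaluated at candidate completion dates until one yields margin coverage consistent with the P/p's risk posture.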
Decision Making. Performing an SRA can help the P/p determine the adequacy and placement of
margin in the schedule based on the P/p’s uncertainty and risk postures and schedule performance-to-
date. When performed early in the P/p life cycle, this analysis can influence the schedule baseline to
include adequate schedule margins. For example, the P/p can select the 50th percentile as the planned
schedule completion date and hold schedule margins to the 70th percentile. Also, by selecting the SRA
outputs for subsystems feeding into I&T, the P/p can use the same strategy to establish margins for
subsystem deliveries. This approach is most helpful before the baseline is established (PDR), because
the P/p still has the flexibility to adjust the schedule to align the completion date with the desired
confidence level.
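Under the 50th/70th-percentile strategy described above, the margin to hold is simply the spread between those two percentiles of the simulated finish distribution, as in this hypothetical sketch:

import numpy as np

rng = np.random.default_rng(4)
# Hypothetical SRA finish distribution for a subsystem delivery (days past a reference date).
finish = rng.gamma(9.0, 12.0, 5000)

p50, p70 = np.percentile(finish, [50, 70])
print(f"Planned delivery date (50th percentile): day {p50:.0f}")
print(f"Margin held to the 70th percentile: {p70 - p50:.0f} days")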
It is a best practice for an ICSRA to be routinely performed to demonstrate that the cost reserves are
sufficient to accommodate schedule delays due to anticipated uncertainty and risk impacts. Oftentimes
reserve levels are dictated by standards and rules of thumb. However, conducting an ICSRA will provide
further insight into whether currently held cost reserves are sufficient to cover identified uncertainties
and risks.
Decision Making. Performing an ICSRA can provide an early indicator of whether the budget is
insufficient for the P/p schedule plan. P/p planning is enhanced by integrating costs, as well as risks,
into the schedule. The results generated from an ICSRA facilitate a better understanding of the
adequacy of reserves and the importance of schedule and cost alignment. An indication of inconsistent
schedule and cost alignment may warrant that the Health Check, Resource Integration Assessment, and
the Critical Path and Structural Check be executed again to address errors or defects.
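A minimal sketch of the reserve-sufficiency check, mirroring the Mars-mission example above with hypothetical numbers:

import numpy as np

rng = np.random.default_rng(5)
# Hypothetical ICSRA cost outcomes, $M; real values would come from the simulation.
cost = rng.normal(355.0, 12.0, 5000)

budget = 390.0   # approved budget, $M (hypothetical)
liens = 21.0     # known liens and encumbrances, $M (hypothetical)

c70 = np.percentile(cost, 70)
margin = budget - liens - c70
print(f"70th-percentile cost: ${c70:.0f}M; reserve margin remaining: ${margin:.0f}M")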
[Chart: risk-based launch-ready date and confidence (with a 20th-percentile band) tracked at successive analysis points from PDR (9/98) through 3/99, 8/99, and CDR (3/00), plotted against a vertical axis of dates from Oct. '00 to Jul. '01; the chart compares the baseline and current plans with zero margin, the current plan with risks, the ready-early margin, the planned launch period, and the missed-launch-period region.]
Figure 6-64. An example of how the SRA can be used to track the adequacy of margin to accommodate the risk exposure over
the life cycle of a project.
Decision Making. The project used the information to target risk mitigation investments over time. At
the last point, where the completion date moved into the yellow caution area, the result was weighed
with other information to decide whether or not to replan the launch.
6.4 Skills and Competencies Required for Schedule Assessment and Analysis
The skills and competencies required for Schedule Assessment and Analysis can be found on the ScoPe
website.137
“Most every P/p will inevitably experience changes. To ensure successful P/p execution, effective
change control and disciplined maintenance procedures are necessary. The key is to determine how
the project will approve and track changes as they occur throughout the P/p life cycle. Change can
occur simply by work progressing more quickly or slowly than planned, as well as when changes in
other elements of the P/p occur and/or whether the P/p team decides to modify its approach to the
P/p work.”139
The Schedule Maintenance sub-function ensures that the schedule is updated routinely and maintained
with accurate progress status and approved changes. In addition to establishing the schedule baseline,
the Schedule Control sub-function consists of the measurement of current performance and the
estimating of future performance, as well as the corrective actions required to bring the performance
back into compliance. It is a NASA requirement that the P/p regularly ensure it can meet the internal
MA and the external ABC with acceptable risk. Maintaining the schedule and tracking and monitoring
performance metrics, variances, and trends helps to identify the corrective actions needed to maintain
internal and external commitments.
• SM.MC.1 Schedule Maintenance and Control Follows the SMP: The schedule is maintained and controlled according to the Schedule Management Plan.
• SM.MC.2 Schedule is Baselined: The schedule is baselined at the required, or otherwise appropriate, point in the P/p life cycle.
• SM.MC.3 Schedule Baseline/Updates are Informed by PPBE: The schedule baseline and subsequent updates are informed by the Planning, Programming, Budgeting, and Execution (PPBE) process.
• SM.MC.4 Schedule Baseline is Reviewed and Validated: The schedule baseline is validated by means of a review that includes P/p management, P/p staff, peers, and stakeholders.
• SM.MC.5 Schedule is Maintained/Updated Using Actual Progress: The schedule is maintained/updated to reflect the current status using actual progress.
• SM.MC.6 Schedule Performance is Tracked and Measured Against Schedule Baseline: Schedule performance is routinely tracked and measured against the schedule baseline to identify, monitor, and control schedule variances and trends.
• SM.MC.7 Deterministic Techniques/Metrics are Used to Measure Schedule Performance and Monitor Trends: Deterministic techniques and metrics (both EVM-based and non-EVM-based) are used to measure schedule performance and monitor trends.
• SM.MC.8 Schedule Margin is Tracked and Monitored: Schedule margin is tracked and monitored throughout the P/p life cycle.
• SM.MC.9 Probabilistic Risk-Based Techniques/Metrics are Used to Measure Schedule Performance and Monitor Trends: Probabilistic risk-based techniques and metrics are used to measure schedule performance and monitor trends.
• SM.MC.10 Corrective Actions are Identified and Tracked Throughout the P/p Life Cycle: Corrective actions are identified and tracked throughout the P/p life cycle to facilitate traceability to formal schedule baseline changes.
• SM.MC.11 Schedule Baseline is Updated According to Corrective Actions: The schedule baseline is updated according to corrective actions (replanning or rebaselining), as needed, as part of the P/p's change control process.
139 PMI. Practice Standard for Scheduling. Second Edition. Page 33.
7.2 Prerequisites
Schedule Maintenance and Control can be initiated when:
• The SMP sub-plan, the Schedule Maintenance and Control Plan, which captures the requirements
for updating the schedule, tracking performance measurements, and taking corrective actions to
control the schedule throughout the life cycle, is available
• The Schedule Database is completely developed and populated with all data, including:
o Completion of all BoEs
o Output of the IMS, Summary Schedule, and Analysis Schedule if appropriate
o Collection of all supporting documentation (e.g., PPBE guidance document)
This complete set of information and data must be identified, marked, and archived according to the
P/p’s CM/DM process. Once the Schedule Database is complete and entered into version control, the
P/p must ensure that the IMS has been successfully verified through the Assessment and Analysis sub-
functions described in Chapter 6 of this document before baseline approval. For Space Flight P/p’s (NPR
7120.5), if the LCC exceeds $250M, the P/p will be required to perform an ICSRA to determine the JCL,
which is used in decision making to establish the MA and ABC, prior to baselining.
Figure 7-2. The Schedule Maintenance and Control sub-function consists of two maintenance and three control procedures.
Either method is acceptable. However, the first method (baseline the entire IMS) would necessitate a
more rigorous, disciplined, and labor-intensive baseline control process since any change to schedule
baseline data would need to be documented. The second method, sometimes referred to as
“notification and control,” still requires baseline control with documented changes but only for those
tasks and milestones that have been selected as part of the schedule performance baseline (or formal
PMB, if applicable). This enables the P/p team to make adjustments to planned tasks without formal
baseline change documentation as long as the changes do not impact the selected baseline data.
If using a "notification and control" approach, schedule control should be focused on a carefully selected
and meaningful set of tasks and/or milestones from the baseline IMS. This set of control tasks and
milestones may include, but is not limited to, items such as:
• Contract Milestones
• Major Integration Milestones
• Key Procurement Milestones
• Critical Test Activities
• Hardware Deliveries
• Facility Readiness Milestones
• Technical Reviews
• Verification Milestones
• Operational Readiness Milestones
Another important consideration relative to baseline content is whether the P/p has a requirement to
implement EVM. For P/ps requiring EVM, there is a more direct and rigorous relationship to be
maintained between the total schedule baseline and performance measurement processes, since the
schedule baseline is an integral component of the PMB, which is used as a basis for EVM calculations.
For P/ps not employing EVM, there is usually more flexibility in determining the schedule baseline
content.
The schedule CCB is made up of selected members of the P/p management and support teams who are
responsible for the schedules and who have decision making responsibilities at the P/p level. The CCB
can be established at the project level with representation from major subsystems (e.g., observatory
manager, instrument systems manager, integration manager, etc.), or depending upon the
organizational structure and interfaces, at the Program level with representation from various projects
(i.e., Deputy PMs, Technical Leads, Integration Managers, Resource/Business Managers, etc.).
The CCB meets on a regular basis to discuss upcoming known or anticipated changes to selected critical
milestones, LCRs or other major reviews, KDPs, or key integration events identified in the schedule.
When warranted, the board members initiate and review Baseline Change Requests (BCRs), as shown in
Figure 7-3, evaluate potential impacts, and decide which changes can be accepted (or rejected),
implemented, and reported out. This documentation typically consists of an MS Excel file listing each
non-compliant item and the appropriate response, providing either a corrective action or retention
rationale. Other formats are acceptable if they serve the same purpose. The proposed corrective action
or retention rationale report must be approved by P/p management. BCRs are routed through the P/p’s
CM/DM process.
Figure 7-3. Example of baseline change request (BCR) for the IMS.
If the CCB approves a corrective action, the P/p’s CM/DM or change control process will help to manage
the technical and programmatic baseline changes. What follows is a description of the typical process
sequence for a P/p engaged in schedule baseline control. This process is used regardless of whether the
proposed change impacts the internal or external baseline commitment.
• A BCR, or equivalent, for the IMS is initiated by a responsible P/p Technical Lead, responsible
contractor, or other outside customer or stakeholder source.
• The P/S should coordinate with the responsible change initiator who originates the BCR to
determine the resulting impacts caused by the proposed change. Impact analysis should be
conducted on both external and internal schedule baselines. This may require the preparation
of “what-if” versions to assess the impact of the proposed change utilizing either the original
baseline plan, the currently approved baseline plan, or the current updated IMS.
• The BCR not only documents a clear description of the proposed change, but also the “before”
and “after” effects of the proposed change on the internal and external schedule and budget
baselines.
• The BCR is then brought to the governing P/p CCB and reviewed in accordance with the P/p’s
Schedule Control Plan and Configuration Management Plan. Note: If the proposed schedule
changes impact Schedule MA or Schedule ABC, then review and approval will also be required
by the appropriate governing change boards and stakeholders.
• Once the schedule BCR has been formally approved by all the applicable CCBs, the P/S issues an
updated schedule with a new revision designator.
• Changes to baseline data are important and should be tracked at the level of detail baselined.
The P/p P/S should maintain a schedule baseline change log to record and track approved
changes to the baseline, as shown in Figure 7-4. This log provides the on-going schedule
baseline traceability required for sound P/p configuration control. (A minimal sketch of such a
log entry follows this list.)
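The sketch below illustrates the kind of fields such a change log might carry; the field names are illustrative only, not a prescribed format.

from dataclasses import dataclass
from datetime import date

@dataclass
class BaselineChange:
    # One approved change to controlled baseline data (hypothetical structure).
    bcr_id: str
    task_id: str
    old_baseline_finish: date
    new_baseline_finish: date
    approving_board: str
    approval_date: date
    rationale: str

change_log = [
    BaselineChange("BCR-042", "1.3.2.5", date(2019, 7, 16), date(2019, 9, 3),
                   "P/p CCB", date(2019, 5, 2), "Vendor delivery slip; risk R-17 realized"),
]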
Commitment (ABC).140 The Management Agreement (MA) is an internal agreement and is often a subset
of the ABC.141 Unless otherwise directed, the P/p’s integrated baseline content, which includes the
schedule baseline, is the basis for the MA. The following definitions and guidance are provided to
ensure conformity with current Agency policy for P/p schedule baseline management:
• Agency Baseline Commitment (ABC). The ABC is NASA’s commitment to Congress that the P/p
will not exceed its identified external development cost and/or schedule. The schedule portion
of the ABC (Schedule ABC) is typically the planned launch date for a Space Flight P/p but may be
a different end-item delivery date or key milestone date for other P/p types, such as Research
and Technology P/ps. For Space Flight P/ps (NPR 7120.5), the ABC is established as part of KDP
C approval. For Research and Technology P/ps (NPR 7120.8), the ABC is established as part of
the Program Approval KDP. The ABC is documented in a Decision Memorandum (DM).
• Management Agreement (MA). The MA is the internal agreement between the Center, the
Mission Directorate, the P/p, and the NASA Administrator that the P/p will not exceed its
identified internal development cost and/or schedule targets. The schedule portion of the MA
(Schedule MA) is typically set prior to the Schedule ABC and is consistent with the P/p's
baseline IMS, as well as the P/p’s integrated performance baseline, or PMB. The MA is
documented in a DM. The period of time between the Schedule MA and Schedule ABC
indicates the available schedule margin between the internal and external commitment dates,
usually controlled and funded by the Mission Directorate. The MA date should be included in
the P/p IMS to facilitate execution and performance measurements, as well as in associated
schedule reporting products to aid in decision making and communication with stakeholders.
In some cases, P/ps will establish an internal baseline end-item delivery or launch date in
advance of the MA date to allow for additional schedule margin.
Figure 7-5 describes the relationship between the P/p baseline schedule, integrated baseline, MA, and
ABC.
140 Per NPR 7120.5, “The ABC establishes and documents an integrated set of project requirements, cost, schedule, technical
content, and an agreed-to Joint Confidence Level that forms the basis for NASA’s commitment with the external entities of
OMB and Congress.”
141 Per NPR 7120.5, “Within the Decision Memorandum, the parameters and authorities over which the PM has management
control constitute the P/p Management Agreement. A PM has the authority to manage within the Management Agreement
and is accountable for compliance with the terms of the agreement.”
Figure 7-5. Illustration of the relationship between the P/p baseline schedule, the P/p integrated baseline, the MA, and the ABC.
Figure 7-6 illustrates how an MA and ABC are related to the P/p’s life cycle.
Figure 7-6. Illustration of MA and ABC as it pertains to the NASA P/p life cycle.
Note: The ABC is not always established as the P/p’s “work-to” finish date (e.g., launch date), but may
have additional margin to accommodate risks and uncertainties to ensure that the ABC can be met with
acceptable risk. The period of time between the MA date and the ABC date often indicates the HQ-held
schedule margin.
When creating the schedule baseline, it is necessary to establish what schedule content will be formally
maintained and controlled. It is important to consider the external schedule commitments made and
where they are included in the IMS. It is crucial that the schedule baseline encompasses the total
approved scope of work, as defined in Section 5.3, and accurately models the P/p’s plan for
implementation. The SMP describes how the P/p encompasses the total approved scope, which
depends on whether the P/p is primarily in-house, or includes contractor work or external partners:
• In-house Schedule Coordination. The P/S(s) should support the Technical Leads for all
scheduling requirements and perform Schedule Management Planning coordination and
detailed Schedule Management Development for all in-house P/ps in partnership with other
organizations, as required, to facilitate the Schedule Control sub-function. In-house effort could
include the entire P/p, or a major element within the P/p such as the spacecraft, a single
scientific instrument, or systems integration and test. The role of support service contractors
would also be considered part of in-house effort.
• Contractor Schedule Coordination. To effectively integrate contractor schedule data into the
P/p IMS to enable Schedule Control, it is imperative that a clear understanding exists between
the government and contractors about such details as schedule content, level of detail, formats,
reporting frequency, tools, thresholds, responsibilities, and controls. Anything that can be done
during P/p initiation and Schedule Management Planning to clarify what is expected of the
contractor will reap huge benefits in saving time and money and reduce stress and frustration
levels in the personnel carrying out P/p implementation. This approach will also serve to
provide additional risk mitigation throughout the P/p life cycle.
• External Partner Schedule Coordination. External partners include those with other Agencies,
universities or other research institutions, international partnerships, or other business
arrangements not involving contracts or procurements. Reporting requirements dictate the
level of detail from partner schedules included in the P/p IMS. While Schedule Control of
external partner schedule inputs is often limited, it should be noted that some arrangements
permit NASA schedule expertise to be used in the development of partner schedules to facilitate
integration for enhanced management capabilities, including Schedule Control. For example,
some Science Mission Directorate (SMD) projects with deliverables provided by universities
often provide direct scheduling support to the institution from their own project scheduling staff
to assist in schedule development and status reporting.
to the MA. P/ps may tailor their approach to establishing the Implementation Phase schedule baseline
using the guidance in Figure 7-7.
Figure 7-7. The table provides guidance on establishing the schedule baseline using different methods.
The Implementation Phase schedule baseline is established (i.e., “set,” “struck,” or “frozen”) at the task
level of the IMS based on the early start dates, early finish dates, and durations for the tasks in the IMS
for all of the IMS methods. In some cases, for out-of-house work, the contractor schedules may already
be baselined in accordance with contractual requirements or internal contractor PP&C or EVM policies
and procedures. This presents challenges for how P/ps will establish the Implementation Phase
schedule baseline in a manner that includes these out-of-house elements. Options include:
• Overriding the contractor’s schedule baseline (if any) by setting new baseline dates at PDR or
Program Approval, as appropriate, or when a rebaseline occurs
• Including the contractor’s current baseline dates as contained in their native schedules (normally
the preferred approach)
• Negotiating with the contractor’s scheduling office for the contractor to set (or reset) the schedule
baseline dates in designated IMS file fields
If a summarization of the contractor’s IMS is included as part of the P/p IMS using one of the IMS
methods described in Section 5.6.1, the P/S must ensure traceability between the contractor’s IMS and
the summary IMS prepared by the P/S.
Some contractors have adopted approaches that use the “late” start and finish dates for setting the
schedule baseline. While this concept may be part of these contractors’ PP&C processes, a “late date”
approach is a gaming technique designed to minimize schedule variance reporting to customers. P/ps
may have little leverage to change this approach used by some contractors and will need to recognize it
when examining schedule performance.
Before statusing and updating, or incorporating changes through replanning or rebaselining, an “original
baseline” version of the Implementation Phase baseline IMS should be documented. Recommended
approaches for documenting the original baseline include archiving a copy of the Implementation Phase
baseline IMS file or using one of the alternative sets of baseline fields, date fields, or start/finish fields
within MS Project (or other scheduling tool) to save the tasks’ baseline early start dates, early finish
dates, and remaining durations.
For Research and Technology P/ps, NPR 7120.8 requires that the P/p, "Document the initial estimated
LCC, the annual cost breakdown, or other appropriate cost description for the P/p, including costs for
each participating Center, if applicable, that is consistent with the project WBS, schedule, and
performance parameters to form the project estimate baseline.”143
142 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Expiration Date: August 14, 2020. Page 60.
143 NPR 7120.8A. NASA Research and Technology Program and Project Management Requirements. Effective Date: September
[Chart: planned workforce (FTEs) and budget (NOA) profiles across the P/p timeline (contract award, PRR, SRR, PDR, CDR, production start, Vehicle 1 integration, 1st flight, and 2nd flight), annotated "Labor plan not consistent with planned budget" and "Peaks occur too late in the project."]
Schedule baselines must be approached as an integrated P/p team product. Whether the P/p is
primarily composed of in-house, contractor, or external partner effort, it is the total P/p team, including
contractors and external partners, if applicable, which must claim ownership of the schedule content
and its validity. As data is reviewed and changes are identified, final revisions should be made to the
schedule before approving the baseline. A schedule management process should be implemented
within each P/p organization that dictates that, prior to schedule baselining, the PM along with each of
the responsible government, contractor, and/or external partner Technical Leads, Programmatic Leads,
and associated COTRs (and/or their designated representatives) must perform a thorough IMS review.
Thus, it is a best practice for the schedule baseline to be validated by means of a review that includes
the P/p management team, P/p staff, peers, and stakeholders (e.g., contractors, external partners,
etc.). This review should cover not only schedule content, but also task/milestone sequencing,
associated resources, slack (float) analysis, probabilistic schedule uncertainty, and all valid constraints
that apply.
At a minimum, the P/S should review the results of these changes with all the affected P/p team
members. During the final IMS review the following activities should occur:
• Verify major P/p milestones that have changed and gain approval from the PM and the affected
organizations. In the event that the result of these revisions causes a major milestone date
change that is unacceptable, the P/p team must return to the review process.
• Evaluate P/p risks that have already been documented as well as identify potential new risks to
ensure impacts have been adequately factored into either task duration, risk, or duration
uncertainty estimates, as well as overall P/p duration estimates. Note: The aggregate duration
impact of task duration uncertainties and discrete risks will also help in evaluating the adequacy
of schedule margin identified in the IMS.
• Evaluate and ensure that there is adequate schedule margin and that it is clearly identified as
such in the P/p schedule. Performing an SRA as described in Chapter 6 will assist in this
evaluation.
• Ensure that the schedule is in congruence with the budget and workforce baselines.
• Enlist a commitment to P/p management from all P/p stakeholders affected and responsible
entities to adhere to the plan as reflected in the baseline. This will involve all required resource
and task performance commitments (i.e., dates, facilities, personnel, etc.).
• Save and archive an electronic copy of the approved P/p baseline for Schedule Communication
and Documentation purposes, as described in Chapter 8.
• Establish configuration control of both in-house NASA and contractor schedule baselines.
Validation of the schedule baseline must be completed prior to the KDP C milestone to support the pre-
approval reviews (e.g., EVM Assessment, IBR, PDR, as applicable). Review and approval of the schedule
baseline should not be taken lightly, and any changes should be carefully controlled.144 This is
especially true when utilizing EVM techniques.
• When EVM is Required: For P/ps whose LCC exceeds $250M, it is a requirement to perform
EVM.145 Whenever EVM is to be performed, a PMB is required to ensure that the P/p’s work is
properly linked with its cost, schedule, and risk and that the management processes are in place
to conduct project-level EVM.146 Per 7120.5, “for P/ps requiring EVM, Mission Directorates shall
conduct a Pre-Approval IBR as part of their preparations for KDP C to ensure that the P/p’s work
is properly linked with its cost, schedule, and risk and that the management processes are in
144 NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook states, “IBR is required to verify
technical content and the realism of related performance budgets, resources, and schedules. It is a risk-based review of a
supplier’s PMB conducted by the customer (e.g., the Mission Directorate, the program, the project, or even the contractor over
its subcontractors). While an IBR has traditionally been conducted on contracts, it can be effective when conducted on in-house
work as well. The IBR ensures that the PMB is realistic for accomplishing all the authorized work within the authorized schedule
and budget and provides a mutual understanding of the supplier’s underlying management control systems. Subsequent IBRs
may be required when there are significant changes to the PMB such as a modification to the project requirements (scope,
schedule, or budget) or a project replan.” Page 369. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000400.pdf
145 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160005291.pdf
place to conduct P/p-level EVM.” The initial IBR should be accomplished within the first three
to six months after contract award or after approval of the P/p Plan by the Mission Directorate
Associate Administrator (MDAA) or Mission Support Office Director (MSOD). On P/ps of short
duration, it may be necessary to initiate the review earlier in order to make best use of the
information derived from the review. During this review, joint contractor and government teams
review the total P/p cost, schedule, and technical baseline for the purpose of ensuring that a
valid baseline is in place and that a mutual understanding and agreement exists in the scope of
work and in the amount of resources required. An IBR additionally identifies and alerts the P/p
team to potential P/p risks.
• When EVM is Not Required: For P/ps with an LCC less than $250M, it is a recommended practice to use
concepts and procedures defined in the NASA IBR Handbook to tailor an appropriate review to
validate the baseline prior to KDP C.
These types of reviews should be carried out prior to initial P/p baselining and then repeated as
necessary after major P/p changes or at specified intervals and/or events (i.e., Planning, Programming,
Budgeting, and Execution (PPBE) updates, Preliminary Design Reviews (PDR), Critical Design Reviews
(CDR), etc.), accompanied by PM buy-in to ensure on-going schedule integrity. It must be stressed,
however, that
establishing new IMS baseline schedule dates should only occur when major scope and/or budgetary
changes have been encountered, which require formal approval by P/p management and may require
formal rebaselining procedures as detailed in Section 7.3.4.7.
7.3.2 Procedure 2. Maintenance – Update Schedule with Actual Progress
It is a best practice for the schedule to be maintained/updated to reflect the current status using
actual progress. Establishing and maintaining a credible schedule baseline is integral to sound P/p
Schedule Management. The IMS reflects both the P/p-approved time-phased plan (including all
subsequent approved changes), and the time-phased plan with its current task progress, sequence, and
forecasts. Once the IMS has been baselined, the Schedule Database is routinely updated with current
progress so that the scheduling tool can recalculate the total float, critical and driving paths, and the
projected completion dates. It is critically important for the IMS to accurately and realistically reflect
the current plan to complete the remaining authorized scope as contained in the baseline.
It should be understood that schedule baseline data represents the original plan, while current schedule
data reflects actual and forecasted outcomes. The “current schedule” is defined as the schedule
baseline with all approved content changes as of “time now”, or the status date. All work to the left of
time now reflects “actuals” or work that has already occurred. All work to the right of time now is
“future work”. Schedule updates are typically provided by the Technical Leads, contractors, and/or
external partners according to a status window that encompasses any updates since the last status
update to time now. It is a recommended practice that all updates to the schedule be provided
according to a single status date. The time now status date is typically aligned to month-end closeout
dates. Routine updating ensures that the IMS reflects the current status and logically calculated forward
plan of the P/p. As the IMS is updated, the P/p can track performance against the schedule baseline.
Status updates should be made as frequently as feasible. The frequency is oftentimes dependent upon
what phase the P/p is in, who is doing the work (in-house NASA, contractor, or both), as well as the
number of resources available to gather, input, and analyze the new status updates. Typically, early in a
multi-year P/p, a monthly update is adequate, but if the necessary resources and processes are in place,
then weekly or bi-weekly may be the preferable interval. P/p scope that is being implemented by in-
house NASA organizations provides the flexibility of updating progress weekly or bi-weekly. P/ps with
prime contractor involvement will generally get an updated IMS from the contractor monthly. This
situation limits the capability of providing management with a fully integrated and updated IMS to a
monthly cycle. There are always some exceptions to the update guidelines established by a P/p that
may come into play; for example, the type of work being done may dictate the frequency or the level of
visibility required. Also, some schedule items are designated as "management reporting"
tasks/milestones and may require a more frequent update cycle.
It is important to note that this procedure is not intended to change any baseline data within the Schedule
Database. The baseline data is a part of the integrated performance baseline, or PMB, and cannot be
changed without going through the formal CM/DM process for making changes. For instance, activity
durations and workflows can be replanned as part of the routine updating processes, as long as they do
not impact the formally tracked baseline dates in the schedule (i.e., the activity and milestone types
identified in Procedure 1). However, when changes to baseline dates are required, they must be formally
processed through the P/p’s CM/DM process. Any non-compliances, defect reports, and resolutions
should be documented. Procedure 4 in Section 7.3.4 explains the process for when schedule
maintenance updates impact the schedule baseline and require official corrective actions, such as replan
or rebaseline activities.
Activities may also require informational changes, which are revisions to task/milestone notes,
descriptions, or coding data. If a change control process is in use, it should be consulted and followed
for these revisions. If a change control process is not in use, it is recommended that changes of this
nature be done in a consistent fashion and in keeping with the existing P/p guidelines for task
nomenclature and coding.
Note: Minor modifications to network logic are necessary on occasion to maintain an accurate
reflection of the work being performed. It is highly recommended however, that a change control
process be established and adhered to for logic changes that result in impacts to contractual or other
management control milestones. Logic modifications have a direct impact on the planned (calculated)
dates for activities, including contractually required and management-directed events. Before and after
any significant logic modification, an electronic copy of the schedule should be made and stored for
safekeeping. A record of the change should be kept in appropriate notes fields in the IMS, along with
the reason for the change, and the person authorizing the change, particularly if the change impacts a
baselined activity or milestone reflecting a P/p commitment.
Add New Activities/Milestones
New activities and/or milestones may be added to better define existing work scope or to add new work
scope. Both scenarios require adherence to existing schedule controls for descriptions, structure,
coding, network logic, duration, resource allocation, and risk data. Often the addition of new activities
results in a longer duration for the overall schedule, making it necessary to address some of the
assessment and analysis sub-functions defined in Chapter 6. When risk mitigation activities are added
as new work to the schedule, it is a recommended practice that they be flagged or coded as such within
the IMS.
Note: During the P/p’s schedule status update cycle (e.g., monthly), it is not uncommon to add new
activities to the IMS in the very near term (i.e., within the following month) for increased schedule
management visibility. It is optional whether the P/p sets baseline dates for these tasks since they can
be considered logical enhancements to the current schedule forecast and are valuable for managing the
P/p effort. For example, if the status date of the IMS is June 30th, and three new activities are added to
the current IMS for completion in July, it is not necessary to set baseline start and finish dates for them.
The process for handling these activity additions in the current P/p IMS should be described in the P/p’s
SMP.
Delete Existing Activities/Milestones
As changes are made to the technical content of the P/p or when descope plans are executed, activities
will need to be deleted. Before deleting any task or milestone from the Schedule Database, existing
network logic interdependencies and resource allocations should be reconciled. It is a recommended
practice to properly codify and retain deleted activities within the IMS for a few update cycles just to be
able to retrace, verify, or restructure logic ties should an error be discovered.
The impact, if any, of these updates should be assessed against and/or reconciled with the IMS baseline.
7.3.2.3 Steps for Updating the Schedule
The detailed steps for updating the schedule are listed below:
• Gather task/milestone status. This may be accomplished in various ways, such as providing task
owners with a printout containing their specific tasks that require update information, holding
face-to-face meetings with task owners to discuss and redline the schedule copy, or establishing
weekly, bi-weekly, and/or monthly P/p IMS update meetings with all task owners participating
by verbally providing their status. Regardless of the strategy for gathering updates, the P/p must
ensure that progress given is consistent with the pre-established task completion criteria
documented in the SMP.
• Incorporate the gathered status updates into the Schedule Database. Enter Actual Start and
Actual Finish dates into the Schedule Database. For in-progress activities, the percent complete
is entered. It is important to understand that many scheduling tools offer different “%
Complete” fields with different functions. The “% Complete” commonly in the default view for
most scheduling tools is actually “percent duration complete” and may or may not be directly
related to the “physical work percent complete.” This is particularly important when using
earned value because earned value does not reflect days consumed but dollars consumed. If
subject to EVM, the same PMTs identified during Planning should be used for making updates to
the schedule activities. PMTs are further discussed in Procedure 3. For resource-loaded
schedules, the scheduling tool may also have "Actual Work" or "Remaining Work" fields that
help keep the P/S from arbitrarily entering percentages that do not reflect physical progress.
Using these fields will automatically update the value in the "% Complete" field.
• Ensure that data is provided for all activities that are time phased to the left of the current
status date. Reflect activities as actually started and in-progress (with proper completion
forecast), actually completed, or re-forecasted to a more accurate start and/or completion date.
(Beware that some schedule management tools, such as MS Project, do not force the user to
update the status of on-going or behind schedule tasks/milestones). It is important that all
incomplete tasks/milestones in the schedule be updated to a single status date, including
tasks/milestones that should have started or completed, but have not.
• Use "Remaining Duration" as the primary method for providing status of in-progress activities.
This will keep projected finish dates accurate, as well as successor-linked activities/milestones
properly time phased. (See the sketch following this list.)
• Analyze schedule impact and resolve issues. After all status updates have been incorporated
into the Schedule Database, it is important to analyze schedule impacts and resolve all issues
resulting from the new status updates. This analysis includes, but is not limited to: identifying
the current critical path and comparing to the previous critical path, identifying and correcting
status input errors, identifying tasks/milestones with missing status, identifying new schedule
related risks, identifying necessary logic, resource, and calendar changes that are required, etc.
• Ensure the IMS reflects, as accurately as possible, the current plan for accomplishing the
remaining work. This will involve updates to network logic, remaining durations, and actual
start and finish dates.
• Copy and archive IMS versions to the P/p’s schedule repository and the Agency Schedule
Repository, if applicable, prior to each update cycle.147 This will ensure proper historical records
for future audit activity and to provide a source of reliable schedule duration information for
future duration estimating and validation. It also supports the P/p’s CM/DM function.
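As an illustration of the remaining-duration statusing called out above, the following sketch forecasts finish dates from owner-provided remaining durations; the task records are hypothetical, and calendar days stand in for a real working-day calendar for simplicity.

from datetime import date, timedelta

status_date = date(2019, 6, 30)

tasks = [
    # (name, actual start, task owner's remaining-duration estimate in days)
    ("Integrate avionics", date(2019, 5, 6), 12),
    ("Thermal vacuum test", date(2019, 6, 3), 30),
]

for name, actual_start, remaining in tasks:
    # Forecast finish = status date plus the owner's remaining-duration estimate.
    forecast_finish = status_date + timedelta(days=remaining)
    print(f"{name}: started {actual_start}, forecast finish {forecast_finish}")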
7.3.3 Procedure 3. Control – Measure Performance and Monitor Trends
It is a best practice for schedule performance to be routinely tracked and measured against the
schedule baseline to identify, monitor, and control schedule variances and trends. The purpose of the
schedule baseline is to provide a basis for progress measurements that the P/p can use to control the
schedule performance. It is important to note that “progress” is a status of “where things are at” on the
P/p, whereas “performance” is a determination of “how well a P/p is executing to plan.” Tracking
progress against the schedule baseline according to the Schedule Performance Measures, as
documented in the SMP, and executing prescribed actions when predefined thresholds are breached is
how the P/p adjusts the plan to meet stakeholder commitments. For instance, the P/p can use these
performance indicators throughout the life cycle to further inform future activity duration estimates,
possibly enabling the P/p to employ replanning techniques to control the schedule without having to
formally rebaseline.
The control processes established for managing the schedule based on current performance inputs
should not be confused with the formal baseline change process. The purpose of measuring current
performance and monitoring trends is to provide an accurate reflection of what has been accomplished
along with an accurate representation of how the future work will be carried out, regardless of what the
baseline schedule shows. Maintaining accurate current schedule performance data should not be
hampered by formal control processes; instead, it should be governed by informal rules. These informal
rules may include,
but are not limited to the following:
• Maintain close communication and coordination with all P/p team members
• Gain informal approvals from responsible technical leads
• Gain approvals by PMs
• Maintain continual Schedule Assessment and Analysis (to ensure proper forecasting, progress,
and changes are incorporated)
In-house, contractor, and external partner schedules must be continually monitored to assure successful
P/p performance. To do so requires the in-house entities (e.g., sub-projects), contractors, and/or
147 Agency Policy Guidance to Enhance Earned Value Management (EVM) and Create a Schedule Repository. June 4, 2019.
https://community.max.gov/display/NASA/Schedule+Community+of+Practice
external partners to routinely and electronically submit their fully integrated schedules to NASA’s P/S in
its native file format (e.g., MS Project, Primavera, etc.). Having access to the respective IMSs in their
native file formats makes it possible for the P/S to integrate the information into the P/p Schedule
Database to effectively monitor and evaluate, at any level of detail, the quality and integrity of its task
sequencing, projected dates, critical and driving paths, assigned constraints, resources, coding,
structure, and current status. For example, on contracts, an IPMR is typically provided on a
monthly basis to provide technical, schedule, and cost status information. The purpose of the IPMR is
to provide early identification of problems that may have significant cost, schedule, and/or technical
impacts and report the effects of management actions and P/p status information for use in making and
validating management decisions. P/ps integrate contract IPMR, in-house, and other data to produce a
P/p-level IPMR. As P/p work is accomplished and task/milestone forecast start/finish dates move earlier
or later, it is important to monitor and keep the P/p management team informed of changes to need
dates for various hardware milestones and development efforts. Close communication of this
information between P/p team members, the Acquisition and Contract Management function, and
associated vendors will help ensure parts and material are available when needed.
Performance measurements are made using a selected set of performance metrics as specified in the
SMP. The metrics are divided into two classes, deterministic and stochastic. The deterministic methods
measure the current performance to the plan using a number of different techniques that evaluate the
variances between the baseline and current schedule dates and durations. Those techniques can be
used to estimate future performance but are limited to the knowledge provided by past performance.
Stochastic measurements are predictions of future performance using a Monte Carlo simulation which
models the impacts of risks and uncertainties. Performance measurements are routinely reported to
the P/p management staff through the Communication sub-function discussed in Chapter 8. For P/ps
with contracts, tracking measurements are collected per the DRDs as described in the Schedule
Maintenance and Control Plan. Performance measurements and trend monitoring of this type should
be made available to the PM and Technical Leads for consideration and/or correction. This approach
enables the total integrated P/p team to more effectively identify potential schedule risks in a timely
manner and to select the best strategies for mitigation.
While performance can be measured for all activities in the schedule, changes to activity and milestone
dates will not require approval through the formal baseline change control process unless their
movement impacts a task or milestone that is baseline controlled. The formal control of the schedule
baseline is limited to control of approved activity and milestone types identified in Procedure 1.
However, it should be understood that in carrying out Schedule Maintenance there may be issues and
conflicts that are identified that will precipitate the need for a formal Baseline Change Request (BCR). In
these situations, a BCR will be initiated and processed through a Change Control Board (CCB) in a
manner as outlined in Procedure 4.
The nature of this control procedure can be standardized and tailored for each P/p, depending upon the
size and complexity of the IMS and the P/p’s needs. Implementing IMS version control during the
update process is a simple technique for ensuring that the P/p team and stakeholders are using the
latest schedule information. Practical control of current schedule data can be enhanced through the use
of incremental schedule versions and/or release dates, and also by keeping copies of prior schedule
versions.
7.3.3.1 Step 1. Measure Deterministic Performance and Monitor Trends
It is a best practice for schedule metrics (both EVM-based and non-EVM-based) to be used to measure
deterministic schedule performance and monitor trends. It is imperative that P/p teams establish
sound performance evaluation practices from the very start of implementation. The P/p management
team needs as much meaningful and credible performance information as possible to help keep the P/p
on track in order to meet planned objectives and commitments. It is a recommended practice that
periodic performance trend analyses be executed on IMS data as depicted in the examples shown
below.
The following sections introduce a set of schedule performance metrics intended to assist the P/p
management in using the IMS to make sound programmatic decisions. These metrics include both EVM
and non-EVM based metrics, such as:
• Activity/Milestone Variances and Schedule Variances (SV)
• Activity/Milestone Performance Trends
• Baseline Execution Index (BEI), Current Execution Index (CEI), and Hit or Miss Index (HMI)
• Schedule Performance Index (SPI), Time-based Schedule Performance Index (SPIt) and Earned
Schedule (ES)
• Critical Path Length Index (CPLI)
• Margin and Float (Slack) Erosion
This list of metrics is a representative example and not an exhaustive list of schedule execution metrics
available to the P/p. The metrics and the thresholds for action are selected and defined by the P/p
during the Schedule Management Planning sub-function and documented in the SMP. Overall, this
handbook advocates the use of the IMS primarily as a management tool versus a reporting tool.
It is a recommended practice to use EVM-like measurements for all P/ps, regardless of LCC value,
whenever possible. EVM techniques will likely require the designation of specific milestones within the
baseline IMS as “EVM milestones” to which a portion of the PMB costs will be assigned, as described in
Section 5.5.2. The PMB consists of the budget or budgeted cost for work scheduled (BCWS) for all
control accounts. For in-house P/ps, or P/ps with major elements of in-house work, the schedule
baseline supports the PMB. However, for many NASA P/ps, the P/p’s prime and non-prime
contractors will have contractual EVM requirements. Therefore, some reconciliation between the P/p-
level IMS and the contractors’ IMSs will be necessary.
For P/ps requiring EVM, the complete set of EVM milestones comprises the PMB from an IMS
perspective. The P/S, EVM Analysts, Resource Analysts, and Technical Leads should work together
during Schedule Planning to define how the BCWS will be time-phased and how the earned value, or the
budgeted cost of work performed (BCWP), will be taken. The exact approach for determining BCWS and
taking BCWP may vary among P/ps based upon which Performance Measurement Technique (PMT) is
utilized. Generally, the performance measurement of work packages is derived directly from the
objectively determined status of the time-phased tasks/milestones composing the work packages. Work
packages containing deliverable products, or work associated with deliverable products, are deemed
“discrete effort.” Discrete effort work packages are assigned an appropriate PMT considering duration,
value, and nature of the effort. The same PMT method used for planning purposes and documented in
the SMP should also be used for claiming earned value. Future activities, requiring further definition,
are assigned to planning packages and are reflected in the IMS at a summary level of detail. As planning
package tasks reach the near-term window, they are divided into discrete work packages, and assigned
appropriate PMTs, prior to beginning work. It is important to note that LOE activities, which represent
only support efforts (e.g., P/p management, administration, safety), generally have no discrete products,
making the quantification of accomplishment difficult or impossible. Examples of PMTs include the
following (a short illustrative sketch follows the list):
• Weighted milestone – significant events represented by a milestone assigned a percentage of
the total value of the task/activity.
• 50/50 and 0/100 – used for short-duration efforts that are planned to complete within one to
two reporting periods.
• Percent complete – is either an objective (e.g., based on physical quantities) or subjective
(personal judgment) determination of the percent of the task/activity that has been completed.
It is strongly recommended that as each task percent complete is determined and incorporated
that the task’s remaining duration is also determined and accurately reflected in the IMS.
Note: When updating an in-progress schedule task, it is the remaining duration that becomes
the determining factor in reflecting the task’s accurate forecasted completion date.
• Apportioned – is determined to be the same percent complete as the related task or tasks (e.g.,
safety/quality inspector support for fabrication of hardware).
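The hypothetical sketch below illustrates how BCWP might be claimed under each of the PMTs listed above. It is a minimal illustration only: the function names, data layout, and budget values are assumptions for this example, not a prescribed NASA implementation.

    # Illustrative sketch (not a prescribed implementation) of claiming
    # BCWP under several common Performance Measurement Techniques.

    def weighted_milestone_bcwp(milestones):
        # Each milestone: (budget value assigned to the milestone, complete?).
        return sum(value for value, complete in milestones if complete)

    def fifty_fifty_bcwp(budget, started, finished):
        # 50/50: half the budget is earned at start, the remainder at finish.
        if finished:
            return budget
        return 0.5 * budget if started else 0.0

    def percent_complete_bcwp(budget, percent_complete):
        # Objective or subjective percent complete applied to the budget.
        return budget * percent_complete

    def apportioned_bcwp(budget, base_task_percent_complete):
        # Apportioned effort mirrors the percent complete of its base task.
        return budget * base_task_percent_complete

    # Hypothetical $100K work package measured by weighted milestones.
    print(weighted_milestone_bcwp([(30_000, True), (50_000, True), (20_000, False)]))  # 80000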
PMTs are individually selected for each work package to enable the most accurate evaluation of
performance possible.
Caution. Schedule execution techniques measure schedule performance in different ways, using
different assumptions and data sets. Using a single metric or a small set of metrics runs the risk of the
P/p drawing false conclusions. To mitigate this risk, the P/p should utilize a suite of complementary
schedule metrics to corroborate potential schedule execution concerns. Avoid manipulating the IMS
with the intent of producing favorable metrics for reporting purposes, as this severely impacts the value
of the IMS as a management tool. The objective of using schedule execution metrics is to identify
potential issues, propose and implement solutions, and assess the effectiveness of those solutions, not
to simply have a report card. It is also important to note that frequent baseline changes (i.e.,
replanning, re-baselining) can also reduce the effectiveness of the IMS as a management tool, as they
may give false representations about how the P/p is truly executing to its plan.
7.3.3.1.1 Activity/Milestone Variances and Schedule Variances (SV)
Variance metrics identify when tasks and milestones deviate from the schedule baseline. It is important
to routinely compare the planned/baseline and current schedules to identify, measure, and monitor
significant variances associated with key milestone events, as well as critical path and near-critical path
activities. LOE activities are measured with the
passage of time using a percent complete measurement technique based on the baseline duration of
the task. Due to the nature of LOE tasks, they should never reflect a schedule variance. Understanding
schedule execution performance for a given timeframe in the schedule may provide added insight, such
as looking at the schedule variance metrics by year, quarter, month, or even phase. Schedule metrics
that help to measure activity and milestone variances are identified in the table in Figure 7-9.
Figure 7-9. The table provides a list of common schedule performance metrics that measure schedule variance.
Description. In general, schedule variance is a calculation of the difference between the baseline
expected progress-to-date and the actual progress of an activity. Utilizing tools such as Acumen Fuse,
schedule variance can be calculated for activities that are planned, in-progress, or complete.148
For P/ps requiring EVM, Schedule Variance (SV) is a standard EVM measurement. SV shows how the
schedule is performing against the time-phased, budgeted PMB. In other words, it shows an ahead of
schedule, on schedule, or behind schedule situation. SV measures the difference between the value of
work accomplished and the value of work planned.
𝑆𝑉 = 𝐵𝐶𝑊𝑃 − 𝐵𝐶𝑊𝑆
Where:
• BCWP is the Budgeted Cost of Work Performed to Time-Now
• BCWS is the Budgeted Cost of Work Scheduled to Time-Now
Figure 7-10. An example stoplight chart showing the interpretation of the SV.
SVt is a time-based measure of whether the P/p is ahead of or behind plan. It is analogous to the cost
indicator for Cost Variance (CV), as both are referenced to “actuals”.
𝑆𝑉𝑡 = 𝐸𝑆 − 𝐴𝑇
Where:
• ES is the Earned Schedule calculated by projecting BCWP onto BCWS and measuring
the time units
• AT is the Actual Time (Time-Now status date)
Figure 7-11. An example stoplight chart showing the interpretation of the SVt.
The SV and SVt are often presented with a trend plot as shown in Figure 7-12. At “Time Now”, $80
worth of work was planned to be completed, but only $60 worth of work was actually completed,
which results in a -$20 value for the SV. Also, at “Time Now”, 15 months have passed but
148 NASA has an Agency license agreement for Acumen Fuse, which is available on the NASA Software Center application.
Instructions for installing Acumen Fuse can be found on the NASA SCoPe website, https://community.max.gov/x/9rjRYg.
the value of work accomplished is only equal to 12 months of planned work, resulting in a -3-month
value for the SVt.
Figure 7-12. This figure illustrates an example of the SV and SVt calculations.
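The arithmetic in this example is simple enough to verify directly. The following minimal sketch reproduces the Figure 7-12 values quoted in the text (the variable names are illustrative):

    # Reproducing the Figure 7-12 arithmetic (values from the text).
    bcws_at_time_now = 80.0       # $ value of work planned to date
    bcwp_at_time_now = 60.0       # $ value of work actually performed
    sv = bcwp_at_time_now - bcws_at_time_now
    print(sv)                     # -20.0: unfavorable schedule variance

    earned_schedule_months = 12   # time at which BCWS equaled today's BCWP
    actual_time_months = 15       # "Time Now" status date
    sv_t = earned_schedule_months - actual_time_months
    print(sv_t)                   # -3 months: unfavorable SVt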
Thresholds. Even before the schedule baseline has been set, the P/p can track schedule performance.
Schedule thresholds should be established to aid in identifying and focusing on those variances that
should be monitored and managed. Thresholds help to bound schedule variance questions, such as:
• How many days has the task/milestone changed from the schedule baseline?
• How many days of total slack are associated with those variances?
• Is the variance rationale from the responsible Technical Lead accurate and reasonable?
For some metrics, thresholds may vary due to the type, length, and complexity of the P/p being
implemented. For other metrics, simple guidelines may exist. For instance, when using either the SV or
SVt metric, a negative value is referred to as an unfavorable variance because the value of work planned
is greater than the value of the work performed. Conversely, a positive value is referred to as a
favorable variance because the value of work accomplished was greater than the value of work planned.
SV is calculated in terms of dollars, whereas SVt is calculated in terms of time.
Thresholds agreed upon and established by the P/p management team should be documented in the
SMP along with the added requirement to provide appropriate variance rationale from the responsible
Technical Lead or CAM. A typical example of a schedule variance report is shown below in Figure 7-13.
Figure 7-13. Example of a schedule variance report.
It is important to understand why variances occurred (i.e., the root cause) and what the impacts are to
P/p completion so that appropriate corrective action can be planned. This involves asking a series of
questions related to the performance metrics that show variances:
• What caused the schedule variance? (fix the problem before additional time is lost)
• Does the schedule variance involve tasks that are on the critical path or secondary paths close to
being critical? (if so, every day slip means a day slip to the end date, or very close to moving the
end date)
• Can the work process or task sequence be modified to enable gaining the lost time back?
(identify any work that can be done in parallel instead of series)
• Are inadequate resources causing the variance? (make sure the right number and right skills are
working the job)
• Are additional work shifts needed to make up lost time? (if variance is not on the CP (or close),
then it may not be practical or cost effective to add more shifts)
Uncontrolled schedule variances may impact the schedule in any one of the following ways:
• May cause a slip to the P/p completion
• May cause a new primary critical path or a near secondary path
• May result in the need for resources to be adjusted
• May result in the need for a work around plan to be developed
• May cause conflicts in facility usage
• May impact internal handoffs between departments, or external deliveries to P/p partners
Caveat. As the P/p nears completion, SV and SVt converge to 0.0, gradually losing their value as a
performance indicator. Hence, other performance metrics need to be included in routine Schedule
Control.
7.3.3.1.2 Activity/Milestone Performance Trends
Description Task/Milestone Completion Rates. Figure 7-14 reflects analysis data that compares
monthly average performance rates for accomplishing tasks in the past to the average quantity of
planned tasks required in future months to stay on schedule.
Figure 7-14. The example shows schedule milestone finishes over time.
Thresholds. Thresholds are predetermined and documented in the SMP. Caution should be taken if this
analysis reflects an unrealistic bow-wave of tasks scheduled to occur with higher required completion
rates than the P/p has been able to accomplish previously. This situation is indicative of an unrealistic
schedule. In this type of trend analysis, if the completion rates projected for tasks scheduled for the
next six months are much higher than actual completions accomplished during the past six months, a
closer look should be taken at the type of tasks that are scheduled, to evaluate the need for replanning
in order to keep the schedule realistic.
Figure 7-15. The example shows Cumulative Baseline Activity/Milestone Finishes, Actual Activity/Milestone Finishes,
and Forecasted Activity/Milestone Finishes.
Thresholds. Thresholds are predetermined and documented in the SMP. If the cumulative baseline and
cumulative forecast lines do not match, it is an indication that not all activities have baseline dates. If
the Cumulative Actual Finishes line is above the Cumulative Baseline line, schedule performance is
ahead of plan; if below, schedule performance is behind plan.
7.3.3.1.3 Baseline Execution Index, Current Execution Index, and Hit or Miss Index
Description BEI. The Baseline Execution Index (BEI) answers the question: “Is work getting done?” BEI
is simply the ratio of activities actually completed to activities planned to be completed. In other words,
the BEI is an actual-to-baseline comparison. The BEI value indicates the current cumulative
performance for how well the P/p has actually accomplished baseline tasks during the months of actual
implementation. Typically, LOE activities and milestones are excluded from the calculation. LOE
activities tend to have long durations as they are support activities for the entire P/p and would skew
the BEI calculation. Milestones are excluded since they are not actual work activities. Figure 7-16 shows
the BEI calculation and a standard stoplight chart to help interpret the BEI values.
BEI = Activities Actually Completed / Activities Planned to be Completed
Figure 7-16. An example stoplight chart showing the interpretation of the BEI values.
The BEI is often presented with a trend plot as shown in Figure 7-17. At “Time Now”, 80 activities were
planned to be completed, but only 60 were actually completed, which results in a 0.75 value for the BEI.
[Figure 7-17. BEI trend plot: # Activities versus Time, with 80 activities planned and 60 actually completed at “Time Now”.]
Figure 7-18. This figure illustrates an example BEI trend chart for a NASA project.
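As a minimal illustration, the BEI calculation for the trend-plot example above (80 planned, 60 completed, with LOE and milestones already excluded) can be sketched as:

    # BEI per the formula above: completed vs. planned-to-be-completed,
    # with LOE activities and milestones already excluded from the counts.
    def bei(actually_completed, planned_to_be_completed):
        return actually_completed / planned_to_be_completed

    print(bei(60, 80))  # 0.75, matching the trend-plot example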
Description CEI. Current Execution Index (CEI) provides an alternate technique for monitoring schedule
performance. CEI answers the question: “Is work getting done in the forecasted time frame?” CEI
compares forecast dates from one status period to the next. The goal of the CEI metric is to determine
or measure how well the near-term schedule represents what actually takes place through execution. It
represents the fidelity of the forecast schedule and a P/p’s ability to execute tasks as projected each
month. The CEI is designed to encourage a forward-looking perspective on the schedule. This results in
a more accurate predictive model and increases the P/p’s ability to meet its obligations on schedule.
The CEI reflects an index value, which is determined by dividing the number of tasks/milestones that
actually finished during the current reporting period by the number of tasks that were forecasted to
finish during the reporting period. The CEI is similar to the BEI except that a near term window is set by
the P/p for the calculation; thus, CEI is not an actual-to-baseline comparison, but an actual-to-forecast
comparison. This is done to avoid the dilution that is caused by performance early in the P/p life cycle.
Often corrective actions will be initiated to correct poor performance and it is important for the P/p to
examine the effect of the corrective action without the dilution caused by the early performance. On
the other hand, if no action was taken, the P/p needs to measure the current performance to
determine the performance trend.
CEI = Activities Actually Completed in the Window / Activities Planned for Completion in the Window
In the equation, “Window” is defined by the P/p to be the relevant time frame to be used for the
calculation. It is important to note that the value of CEI cannot be over 1.
[Figure: # Activities versus Time, with the P/p-defined Window ending at “Time Now”.]
Figure 7-19. The figure shows the Current Execution Index indicating a downward trend of deteriorating performance.
Figure 7-20. This figure illustrates an example CEI trend chart for a NASA project.
It is a good practice to use this technique along with the BEI when possible so that management can
have insight into not only how well the P/p is performing against the baseline plan, but also how
accurately the team is able to forecast their projected work from one period to the next. In Figure 7-19,
above, the BEI was calculated to be 0.75; however, the CEI is 0.45, indicating a worsening
performance trend. In this case, the P/p should take action to correct the performance. However,
caution is urged depending on the width of the window. In some cases, there may be a bow wave of
completions just beyond the “Time Now” window boundary. In other words, it is possible to pick a
window such that a number of completions are planned near the closing of the window and just a few
days late would move them beyond the window and cause an exaggerated indication of poor
performance.
Description HMI. HMI answers the question: “Is the right work getting done?” The HMI measures the
ratio of baseline tasks completed early or on time to the number of tasks with a baseline finish within
a given month. It informs P/p management of the ratio of activities actually completed in a monthly
reporting period to the number of activities planned for that month. Similar to CEI, the value of HMI
cannot be over 1.
HMI = Activities Actually Completed this Month / Activities Planned to Complete this Month = CEI (Window = Current Month)
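A minimal sketch of the CEI, and of HMI as the special case where the window is the current month, follows; the counts are hypothetical, chosen to match the 0.45 CEI example discussed below:

    # Illustrative sketch of CEI over a P/p-defined window, and of HMI as
    # the special case where the window is the current month.
    def cei(completed_in_window, planned_in_window):
        return completed_in_window / planned_in_window

    def hmi(completed_early_or_on_time_this_month, planned_this_month):
        return completed_early_or_on_time_this_month / planned_this_month

    print(cei(9, 20))   # 0.45 (hypothetical counts matching the CEI example below)
    print(hmi(8, 20))   # 0.40 (hypothetical counts)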
An example of trend charts showing the BEI, CEI, and HMI metrics is shown in Figure 7-21.
Figure 7-21. This figure illustrates an example HMI trend chart for a NASA project.
Thresholds. A typical example of BEI, CEI, and HMI monthly reports and thresholds is shown in Figure
7-22.149 Blue indicates excellent performance; no further oversight penetration is required. For a green
stoplight, a typical action may be to research the P/p for efficiencies to save money. For a yellow
stoplight, the prescribed action may be “Watch” and direct the activity owners to seek improvements
internally. A red stoplight will require corrective actions formally tracked by the P/p. Whatever the
prescribed actions, they were determined during the Schedule Planning Process and are documented in
the SMP.
Figure 7-22. An example set of stoplight charts showing the interpretation of the BEI, CEI and HMI values.
149 Shinn, S. and W. Majerowicz. “A Metrics-Based Approach to Enhancing Schedule Performance Insight.” NASA Cost
Symposium 2016. Slide 25. https://www.nasa.gov/sites/default/files/atoms/files/40_metrics-
based_approach_to_enhancing_schedule_performance_insight_shinn_and_majerowicz_10aug_16v2_tagged.pdf
Trending of CEI and HMI over time can provide early warning signs:
• Unstable Baseline. An HMI trend ≤ .40 is an early warning sign of an unsustainable baseline.
Once an unfavorable HMI trend of .40 or less is established, float (or margin) erosion and
baseline completion delays are likely.
• “Bow Wave”. A CEI trend ≤ .55 is an early warning sign of a “Bow Wave”. A sustained .55 or
less CEI trend indicates a bow wave of unfinished work that will erode the remaining margin
and threaten P/p completion.
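One possible encoding of these early-warning screens is sketched below. The 0.40 and 0.55 thresholds come from the text; the flag wording is illustrative.

    # Sketch of the early-warning screens described above; thresholds are
    # from the text, the flag wording is illustrative.
    def early_warnings(hmi_trend, cei_trend):
        flags = []
        if hmi_trend <= 0.40:
            flags.append("Unstable baseline: float erosion and completion delays likely")
        if cei_trend <= 0.55:
            flags.append("Bow wave: unfinished work threatens remaining margin")
        return flags

    print(early_warnings(hmi_trend=0.38, cei_trend=0.45))  # both flags raised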
Caveat. The BEI and the CEI do not distinguish between activities; they only count completions. Thus,
the BEI and CEI may not indicate whether the “right” work is getting done. There is no adjustment for
simple, low-cost activities versus complex, high-cost activities. The P/p management staff will need to
be careful about taking corrective actions when a large number of low-cost activities may have
disproportionally diluted the BEI and/or CEI. The HMI can be very misleading should a number of
activities finish on the first day of the subsequent month. In addition, high latency baseline control
practices distort BEI and HMI.
7.3.3.1.4 Schedule Performance Index (SPI), Time-based SPI (SPIt), and Earned Schedule (ES)
Description SPI, SPIt and ES. In general, a schedule performance index is a ratio of the performance of
an activity relative to the baseline. Utilizing tools such as Acumen Fuse, the schedule performance index
can be calculated for activities that are planned, in-progress, or complete.150
Schedule Performance Index (SPI) is a standard EVM measurement. The SPI measures the budgeted
cost of work performed up to the “Time Now” date, divided by the budgeted cost of work scheduled to
be performed. It provides the ratio of P/p dollars earned-to-date to the dollars the P/p should have
earned. Thus, SPI is a measure of schedule in terms of cost, or in other words, unspent time-phased
money. Therefore, it is often useful to convert to a time-based measurement, such as SPIt. This is done
by calculating the Earned Schedule (ES), which is the time that equates the budgeted cost of work
performed (BCWP) to the budgeted cost of work scheduled (BCWS), and then dividing the ES by the
actual P/p duration up to “Time Now” (AT). It is analogous to the cost indicator for Cost Performance
Index (CPI), as both are referenced to “actuals”. It provides the ratio of the P/p time earned-to-date to
the time the P/p should have earned. 151 These measurements are illustrated in Figure 7-23.
150 NASA has an Agency license agreement for Acumen Fuse, which is available on the NASA Software Center application.
Instructions for installing Acumen Fuse can be found on the NASA SCoPe website, https://community.max.gov/x/9rjRYg.
151 Additional information on schedule-based EVM metrics can be found in a presentation by Andrea K. Gilstrap, PMP, EVP,
entitled, “Earned Schedule in Empower.” Empower User’s Group. August 28, 2019.
[Figure: Cost ($) versus Time plot of BCWS and BCWP; the gap at “Time Now” is labeled “Schedule Variance = Cost of Work Not Performed”, and the Earned Schedule (ES) marks where BCWP projects onto BCWS (Te = 12 Mos versus Tn = 15 Mos).]
Figure 7-23. The figure illustrates SPI, SPIt, and Earned Schedule.
SPI = BCWP / BCWS
SPIt = Earned Schedule / Actual Duration
Where:
• BCWP is the Budgeted Cost of Work Performed to Time-Now
• BCWS is the Budgeted Cost of Work Scheduled to Time-Now
• Earned Schedule (ES) is the schedule date where the BCWP equals the BCWS
• Actual Time (AT) is the schedule duration up to Time-Now
Figure 7-24. An example Stoplight Chart showing the interpretation of the SPI.
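These relationships can be illustrated with a small sketch. The monthly cumulative BCWS profile below is hypothetical, and ES is located by linear interpolation within the month where cumulative BCWS first reaches the current BCWP:

    # Minimal sketch: compute ES, SPI, and SPIt from a monthly cumulative
    # BCWS profile. All values are hypothetical; ES is found by linear
    # interpolation within the month where cumulative BCWS reaches BCWP.
    def earned_schedule(cum_bcws, bcwp_now):
        for month, bcws in enumerate(cum_bcws, start=1):
            if bcws >= bcwp_now:
                prev = cum_bcws[month - 2] if month > 1 else 0.0
                return (month - 1) + (bcwp_now - prev) / (bcws - prev)
        return float(len(cum_bcws))

    cum_bcws = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 67, 74, 80]
    bcwp_now, actual_time = 60.0, 15.0

    es = earned_schedule(cum_bcws, bcwp_now)         # 12.0 months
    spi = bcwp_now / cum_bcws[int(actual_time) - 1]  # 60/80 = 0.75
    spi_t = es / actual_time                         # 12/15 = 0.80
    print(es, spi, spi_t)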
Evaluating cost-based EVM measurements alongside schedule-based metrics often provides additional
insight as to P/p progress. For instance, using Total Float (TF) values (also called Total Slack) and
Schedule Performance Index (SPI) values provides a comprehensive performance view of the status of a
P/p from an integrated cost and schedule perspective, as shown in Figure 7-25.
Figure 7-25. The relationship between SPI and Total Float.
In addition, when cost and schedule integration is sound, BEI and the cumulative SPI (SPIcum) should
trend in a similar pattern, as shown in Figure 7-26.
Figure 7-27 shows an example of a NASA project, where the BEI and SPIcum were inconsistent in their
trending. The SPI indicated good performance (green), while the BEI indicated deterioration (black).
Retroactive BCRs added already completed work to the baseline (red), but performance erosion
continued as measured by the BEI. The project eventually slipped its launch date.152
152 Shinn, S. and W. Majerowicz. “A Metrics-Based Approach to Enhancing Schedule Performance Insight.” NASA Cost
Symposium 2016. Slide 36. https://www.nasa.gov/sites/default/files/atoms/files/40_metrics-
based_approach_to_enhancing_schedule_performance_insight_shinn_and_majerowicz_10aug_16v2_tagged.pdf
Figure 7-27. An example showing inconsistent BEI and SPI trending.
Thresholds. For P/ps required to perform EVM, thresholds are established by P/p management at the
WBS element or control account level and should be captured in the P/p Plan and/or Technical, Cost,
and Schedule Control Plan.153 SPI values greater than approximately 1.1–1.2 indicate poor planning,
and the P/p should have prescribed actions in the SMP to search for efficiencies to reduce costs or to
contract the schedule.
Caveat. SPI is not a precise measure of schedule variance in days but rather a measure of unspent
money. When the P/p completes, the SPI will equal 1.0 regardless of actual completion time because all
money will be spent. Likewise, as the P/p nears the end of the scheduled duration, the percent of total
money spent is large and begins to dilute the SPI to the point where it is no longer a useful indicator of
schedule performance. It is a recommended practice that once the P/p has spent two-thirds of its
budget, greater emphasis be placed on other schedule performance measures. Earned Schedule
describes schedule variance in time units, typically retains utility until P/p completion, and does not
automatically converge toward 1.0 near P/p completion.
7.3.3.1.5 Critical Path Length Index (CPLI)
Description CPLI. The Critical Path Length Index (CPLI) measures the realism of completing the
remaining work on time by comparing the critical path length plus total float to the critical path length:
CPLI = (Critical Path Length + Total Float) / Critical Path Length
A CPLI of 1.0 means the P/p must execute the remaining critical path exactly as planned to finish on
time; values below 1.0 indicate that the finish date is threatened.
Figure 7-28. An example Stoplight Chart showing the interpretation of the CPLI values.
The CPLI is often plotted over time to show trends. Trend charts are useful in giving the P/p
management team early warnings, as shown in Figure 7-29. Clearly, in the February-April time frame,
the trend chart shows that urgent corrective action will be needed.
[Figure 7-29. CPLI trend chart by month; annotated bands: above approximately 1.05, “Project likely to complete on-time even with minor schedule performance degradations”; near 1.0, “Project will need near perfect, or improved, performance to complete on-time”.]
Thresholds. The P/p has established thresholds for prescribed actions in the SMP for all selected
Schedule Performance Measures. In the examples in the figures above, the P/p has established color
bands for the prescribed actions. For the green stoplight, or the green shaded area in the trend chart, a
typical action may be to research the P/p for efficiencies to save money. In the yellow coded areas, the
prescribed action may be “Watch” and direct the activity owners to seek improvements internally. The
red coded areas will require corrective actions formally tracked by the P/p. Whatever the prescribed
actions, they were determined during the Schedule Management Planning Process and are documented
in the SMP.
Caveat. CPLI is a “global” measure and does not provide the detail needed to target the individual
tasks that may be causing the problems. Also note that the CPLI only measures performance along
the critical path and cannot reveal other paths that may be failing and soon to become critical.
Hence, other performance metrics need to be included in the monthly reviews.
7.3.3.1.6 Margin Erosion
Margin is planned according to the guidelines and techniques specified in the SMP and established
within the schedule during Schedule Development, as described in Section 5.5.11. It is likely that a
relatively small number of activities or groups of activities (e.g., a subsystem) will be determined by the
P/p to be critical enough that the schedule should be protected by the assignment of a margin activity.
On the other hand, float, which is the number of days an activity can slip before it hits the critical path,
is not assigned but is calculated by the scheduling tool. Every activity will have a float calculation.
Schedule margin is a control parameter owned by the PM and can only be released through a formal
process. The amount of schedule margin needed is greater early in the P/p and decreases with time as
P/p uncertainty decreases. Because of that decreasing need, P/ps will create a margin burndown curve
and track against that burndown curve. This control process is typically performed against the
deterministic schedule, tracking month-end margin at each schedule update.
Being able to identify where margin activities are housed in the schedule is a critical element of tracking
margin. In other words, margin activities should be clearly identifiable when included within the P/p
schedule. It is also important to understand the difference between available margin days and other
non-working calendar days when calculating “effective margin”. Effective margin is defined as margin
on the critical path and can be determined by zeroing out the margin tasks and calculating the number
of days that the P/p end-item date or finish milestone moves to the left. Note: The cumulative margin
may not equal the sum of the individual margin durations.
Margin should be allocated according to working days in the schedule. It is worth noting the distinction
between the available “working days” (i.e., margin days) and any “non-working” days, such as weekends
or holidays (i.e., contingency days). Whereas margin days are defined as “working days” available in a
schedule (where the P/p does not already have work defined/planned) to mitigate or absorb schedule
risk, contingency days or “non-working days” should only be used as a resource to the PM to recoup
delays due to poor schedule performance so as to not utilize schedule margin. P/p management should
keep in mind that contingency days will likely cost the P/p more money than using available float, but if
there is no float available, contingency days should be the next option prior to the consideration of
margin.
A tracking issue may arise when a P/p distributes the margin and includes weekends and/or holidays
into its margin totals (e.g., if the margin activities are initially tied to non-working days and are later
reallocated to risk changes, the P/p’s effective margin totals could change without any adjustment to
the actual duration on the margin tasks, which would create a margin tracking problem). If the P/p
monitors schedule margin in working days and contingency days separately, the P/p management will
have a better understanding of the time resources available to the P/p at any given point in time. Figure
7-30 illustrates how a P/p can track schedule margin (and contingency) depletion/erosion (and
restoration) over time against the planned depletion.
Figure 7-30. The figure illustrates an example of a tracking margin and contingency days as they relate to the total number of
calendar days.
The chart in Figure 7-30 distinguishes between margin (i.e., working days) as “W”, contingency days
(weekends or holidays) as “Contingency”, and calendar days as “C.” The solid red (margin) and green
(contingency) bars are stacked to show the number of days available at a particular month’s start that
are not planned work days. The crosshatched red (margin) and green (contingency) bars show the
actual month-end days available. The point at the top of the bars shows the total number of month-
end days available (i.e., margin + contingency), while the percentage shows the total days (i.e., margin +
contingency) available as a percentage of the days-to-go in the schedule. The lower blue line indicates
the planned margin burndown; whereas the upper blue line indicates the actual month-end margin
remaining over time.
When tracking margin, it is helpful to understand that margin can be consumed by risks and
uncertainties in several ways: risks become realized and slow down current work or require additional,
new work; risk mitigations are developed and incorporated into the P/p plan/schedule; or uncertainties
become realized, slow down current work or require additional, new work. For each of these instances
the margin is essentially allocated, either as extended task durations, as mitigation activities, or as new
tasks, thereby reducing the “effective margin” by an amount equal to the overall impact that would be
incurred at the point of the margin task. For any zero-float paths, this may be an amount equal to the
duration of the wait time, new work, or mitigation activities. For any non-zero-float paths, the margin
may not need to be reduced by an amount equal to total duration of the wait time, new work, or
mitigation activities, as there may be positive float on the path that will absorb some of the
risk/uncertainty impact.
Keeping a record of how margin is utilized is an important aspect of understanding and communicating
margin consumption. As margin is consumed or reallocated and critical paths change, “non-effective”
margin activities that were once on less-critical paths may end up on the new, primary critical path, now
as “effective margin” activities. It is important to recalculate the total amount of effective margin as
these changes occur. It is a recommended practice to maintain a Margin Log indicating the changes in
schedule margin and the reason for margin consumption.
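A Margin Log can be as simple as dated entries recording each consumption or restoration of margin and its reason. The sketch below shows one hypothetical layout, comparing remaining margin against a planned burndown value; all names and values are illustrative.

    # Illustrative Margin Log sketch: dated entries recording margin
    # consumption (or restoration) and the reason, tracked against a
    # planned burndown (all names and values hypothetical).
    margin_log = [
        ("2020-01-31", -5, "Risk 12 realized: vendor delivery slip"),
        ("2020-02-28", -3, "Mitigation task added for qual-test failure"),
        ("2020-03-31", +2, "Workaround recovered schedule; margin restored"),
    ]
    initial_margin = 40
    remaining = initial_margin + sum(delta for _, delta, _ in margin_log)
    print(remaining)  # 34 working days of margin remaining

    planned_burndown = {"2020-03-31": 35}  # planned margin remaining at month end
    if remaining < planned_burndown["2020-03-31"]:
        print("Margin depletion ahead of plan; review per SMP thresholds")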
Depending on where the P/p is in its life cycle, margin and float erosion tracking may need to be
performed at different levels (e.g., P/p level vs. subsystem level) for adequate insight into schedule
performance. A few specific examples of margin and float erosion tracking follow.
Figure 7-31. The figure shows an example of margin tracking for subsystems (SS) being delivered into I&T.
Figure 7-32. Example of a Margin Trend Report and Margin Log for the critical path showing the trends from the monthly
reports.
[Figure: “Margin Available During I&T and Launch Operations” chart plotting the current margin value against “Watch” and “Corrective Action” thresholds at each I&T and LO milestone: I&T Start, Integ Comp, Env Comp, Pack & Ship, Veh Int Comp, and Launch.]
Figure 7-33. The figure shows an example of margin tracking for I&T and Launch Operations.
Thresholds. The P/p sets the thresholds during the Schedule Management Planning Process and
documents them in the SMP. Once the P/p is in Implementation, the expenditure of schedule margin is
tracked and reported at regularly scheduled P/p reviews. Deviations from the guidelines trigger a
requirement for either an explanation about why the deviation is acceptable or for the initiation of
activities to mitigate the trend. For these example cases, thresholds are selected to set a “Watch” flag
when the margin is threatened, and corrective action is required when significant margin is consumed.
7.3.3.1.7 Float Erosion, Total Float Consumption Index (TFCI), Predicted Critical Path Total Float (CPTF)
Description Float Erosion. An important key to achieving the desired schedule completion date is being
able to identify and evaluate what tasks are directly driving the P/p end date. Total float provides this
capability and knowing the total float for every task and milestone in the schedule will provide
management with the necessary insight into how each task impacts the P/p end date. The Float Erosion
metric is presented in tabular form showing float by WBS. It is typically reviewed monthly.
A Float Erosion table, as shown in Figure 7-34, is helpful in that it provides performance measurement
at lower levels in the IMS and is more likely to catch performance issues before the margin activity is
threatened. Total float for the “Time Now” date is shown for each WBS item in the “Current Float”
column. The P/p can track the data down to the lowest level in the IMS if necessary. In the figure, the
structures subsystem is decomposed to the next level WBS in order to bring P/p management attention
to the erosion of total float for the Pyros. Management will likely request that the Avionics subsystem
be further decomposed at the next review period so that the stressed component can be identified and
targeted for corrective action.
Figure 7-34. Example of a Float Erosion table by WBS element.

Activity | Comment | Plan Date | Need Date | Forecast Date | Plan Float | Current Float
C&DH | None | 6/1/20 | 7/1/20 | 6/15/20 | 30 | 16
Avionics | Attitude controllers failed qual tests, design change pending | 5/15/20 | 7/1/20 | 6/28/20 | 47 | 3
Structures | | | | | |
- Primary Structure | Late delivery of sidewall panels | 6/1/20 | 6/15/20 | 6/17/20 | 14 | -2
- Secondary Structure | None | 6/20/20 | 7/1/20 | 6/25/20 | 11 | 6
- Mechanisms | Early delivery of hatches | 7/1/20 | 7/5/20 | 6/20/20 | 4 | 15
- Pyros | GUIDEP notice of bad lots, batch testing underway | 7/1/20 | 7/10/20 | 7/15/20 | 9 | -5
Thermal Control | None | 7/10/20 | 7/20/20 | 7/10/20 | 10 | 10
… | … | … | … | … | … | …
Total float is often presented as a trend chart by WBS. This is helpful in that the slope of the trend line
can be an early warning indicator of an inability to perform according to schedule, reducing schedule
resiliency. A typical trend chart is shown in Figure 7-35.
[Figure: Total Float trend by month, from roughly +50 days down through negative values, annotated “Attitude Controllers Failed Qual Tests, Modified, retest”.]
Figure 7-35. The figure illustrates a Float Trend Report for tracking the total float for a subsystem.
Sometimes a P/p will have a pre-planned float erosion profile against which actual float is tracked. A
sudden, unexpected change in the slope may indicate that future milestones are threatened.
Figure 7-36 shows a typical stoplight chart for the Float Erosion metric with an interpretation and
actions required. P/ps with higher risks might want more cushion and may set the thresholds higher.
Figure 7-36. An example stoplight chart showing the interpretation of the Current Float values.
Similar to the way in which total float is tracked, free float is valuable in analyzing scenarios involving
schedule impacts and conflicts for a specific task/milestone or a set of tasks and also for prioritization of
resource utilization. It should generally not be used as a primary means of monitoring and schedule
analysis for the total P/p.
Thresholds. The P/p sets the thresholds during the Schedule Management Planning Process and
documents them in the SMP. For the example case, thresholds were selected to set a warning flag when
the float was between 5 and 10 days and corrective action is required when float is less than 5 days.
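Using the example thresholds just described, a simple stoplight classification might look like the following sketch (labels and sample float values are illustrative):

    # Sketch of the example float-erosion stoplight: the 5- and 10-day
    # thresholds come from the text; the labels are illustrative.
    def float_stoplight(current_float_days):
        if current_float_days < 5:
            return "Red: corrective action required"
        if current_float_days <= 10:
            return "Yellow: watch"
        return "Green: no action"

    for wbs, tf in [("Avionics", 3), ("Pyros", -5), ("Thermal Control", 10)]:
        print(wbs, float_stoplight(tf))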
Caveat. As with most deterministic measures, total float is backwards-looking and does not consider
any future risks that may occur. In our example case, 3 days of positive float is available and that may
be sufficient if there is no exposure to a future risk. However, if the redesign approach relies on a new
technology, then 3 days may not be enough. In addition, if the P/p completion task/milestone is
assigned a fixed schedule constraint, then the critical path may reflect a negative float (slack) value. The
schedule may also calculate negative float if there is an out-of-sequence relationship between activities
(e.g., Activity B has started and has an FS predecessor link from Activity A, which has not yet finished).
This is an error in the schedule baseline and should therefore be corrected.
Description TFCI and CPTF. Another metric that takes into consideration what would happen if a
delinquent P/p continued at its current rate of total float erosion is the Total Float Consumption Index
(TFCI).154 TFCI treats total float consumption as an efficiency factor, applying the schedule’s average
rate of total float consumption to the remaining scope of work and thereby projecting a forecast finish
date for the entire P/p.
TFCI = (Project Actual Duration + Critical Path Total Float) / Project Actual Duration
154 PASEG, Version 4.0. National Defense Industrial Association (NDIA), Integrated Program Management Division (IPMD).
March 9, 2016. Page 173.
Figure 7-37. This figure illustrates an example TFCI chart.
Figure 7-38. An example stoplight chart showing the interpretation of the TFCI values.
From the TFCI value, the amount of total float likely at the schedule baseline finish date, known as the
Predicted Critical Path Total Float (CPTF), can be calculated.
Figure 7-39. An example stoplight chart showing the interpretation of the Predicted CPTF values.
Then using the P/p calendar, including non-working days, add (if negative) or subtract (if positive) the
Predicted CPTF number of days to the baseline finish date to calculate a Forecast Finish Date.
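A minimal sketch of these calculations follows. The TFCI formula is from the text; projecting the Predicted CPTF here assumes total float continues to erode at its historical average rate, and the calendar handling is simplified (the text calls for the P/p calendar, including non-working days):

    # Sketch of TFCI and a Predicted CPTF projection (hypothetical values).
    # The projection assumes float erodes at its historical average rate,
    # which is an assumption, not a prescribed method.
    from datetime import date, timedelta

    actual_duration = 300      # working days elapsed
    cp_total_float = -15       # current critical path total float, days
    remaining_duration = 200   # working days to the baseline finish

    tfci = (actual_duration + cp_total_float) / actual_duration   # 0.95
    erosion_per_day = cp_total_float / actual_duration            # -0.05
    predicted_cptf = cp_total_float + erosion_per_day * remaining_duration  # -25

    # Negative Predicted CPTF pushes the forecast finish past the baseline
    # (simple calendar-day shift used here for illustration).
    baseline_finish = date(2021, 6, 30)
    forecast_finish = baseline_finish + timedelta(days=-round(predicted_cptf))
    print(round(tfci, 2), predicted_cptf, forecast_finish)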
Thresholds. The P/p sets the thresholds during the Schedule Management Planning Process and
documents them in the SMP. The P/p may choose to set tighter thresholds if the schedule has high
uncertainty.
Caveat. As with most deterministic measures, TFCI is based on past performance and does not consider
any future risks that may occur. TFCI should be used in conjunction with other risk-based techniques to
forecast future performance estimates. TFCI is meant to be calculated for delinquent P/ps only. Also,
the TFCI metric is sensitive to the Actual Duration (denominator) and should therefore not be used early
in the P/p life cycle.
This list is not exhaustive and different techniques have been used by other NASA P/ps.
The example shown in Figure 7-40 was developed for a NASA project. The left-most band is the range
from the 10th percentile to the 50th percentile, the middle band is the range from the 50th percentile to
the 70th percentile, and the right-most band is the range from the 70th percentile to the 90th percentile.
The chart shows the probability for the Planned Date and the probability for the Need Date, with the
difference representing the total float available. The chart shows where the P/p management needs to
focus attention and exercise controls. For example, Functional Test, Vibe, Therm-Vac, and Deliver have
virtually no chance of on-time completion and would require corrective action.
The figure’s probability values are summarized below; each row is plotted as a band from the 10th to
the 90th percentile (with the 50th and 70th percentile boundaries marked) across a date axis running
from March 2013 to February 2015.

Item | Probability for Planned Date | Probability for Need Date
Sub Sys 1 | 64% | 72%
Sub Sys 2 | 76% | 77%
Sub Sys 3 | 88% | 88%
Sub Sys 4 | 100% | 100%
Sub Sys 5 | 100% | 100%
Sub Sys 6 | 75% | 100%
Functional Test | 0% | N/A
Vibe | 0% | N/A
Therm-Vac | 0% | N/A
Deliver | 52% | N/A
Figure 7-40. Example of a BandAid chart used in a recent NASA project.
Figure 7-41, below, is also from a NASA project and shows similar information but includes the available
margin to show the threat to project-held margin. Due to SBU requirements, the actual subsystems
have been renamed “SS-x”. The chart shows the current best estimate for planned completion as the
left-most edge of the band for each subsystem, followed by slack out to the edge of the project-held
margin in each case. The project is holding 30 days between early integration and late integration into
the I&T flow. Probability data from the SRA is indicated on top of each band. To the right of each
band is the probability of missing the late integration date, marked with the red arrow. For
example, SS-2 shows a 79% probability of finishing on the early integration date, a 90% probability of
finishing on or before the late integration date, and a 10% chance of being late and delaying integration.
[Figure: band chart for each subsystem; for SS-1, the current ready date (4/3/18) supports early integration (4/3/18), followed by 30 days of project-held reserve out to the late integration date (5/15/18), with SRA probabilities of 50%, 74%, and 26% annotated and success slack shown.]
Figure 7-41. Probability of completion for critical items from a recent NASA project.
Thresholds. Although these projects had not established any prescribed thresholds, reasonable
threshold recommendations would be “Watch” when the probability of on-time completion is less than
70% and corrective action required when it is less than 50%. In the figure above, only SS-3 comes near
the threshold for “Watch”, having a probability of 77% of completing before it will begin to delay I&T.
Caveat. The SRA results are very dependent on the quality of the inputs. The risk list and the IMS must
be current and quality checked. These requirements can induce latency in the data if the P/p is not
maintaining currency in the Maintenance sub-function.
decrease may not be linear, as new risks may emerge during the P/p requiring new mitigations to be
instituted.”155
To show the risk-based completion trend, the analysis is repeated at each pre-determined time using
the latest risk data and the latest IMS (or Analysis Schedule). The mean value is plotted along with
uncertainty bars indicating the range of the uncertainty of the output. This stochastic tracking
technique was used by a NASA project and is illustrated in Figure 7-43. The date of the analysis is
indicated on the vertical lines up to the data bars. This project repeated the analysis about every 6
months and at major reviews. The data bars show the mean and the 20th and 80th percentiles. The top
line labeled “Deterministic” shows the planned completion date, margin excluded, and the downward
trend as risks were realized. The lower line shows the trend of the stochastic results over time. The
upward trend shows the effectiveness of risk mitigation and the uncertainty bars decrease with
improved knowledge and retired risks. Everything looks good at the data point for “8/99”; after that
point in time, the project was hit with a low-probability, high-impact risk which dropped the prediction
down into the yellow banded area, but still with positive margin even at the 80th percentile at the lower
end of the uncertainty bar.
155 NASA/SP-2011-3422, NASA Risk Management Handbook. Version 1.0. November 2011. Page 96.
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120000033.pdf
[Figure: risk-based launch-readiness trend chart. Vertical axis: months from Oct. ’00 through Jul. ’01, with bands for “Ready Early”, the “Ready to Launch / Launch Period” (plan 12/7/00), and “Missed Launch Period – Corrective Action or Retention Rationale Required”. Plotted lines: “Current Plan, Zero Margin” and the risk-based “Current Plan, Zero Margin, With Risks” (launch-ready date and confidence, with ±20th-percentile uncertainty bars), with analyses at 9/98, 3/99, 8/99, and 3/00 and the PDR and CDR milestones marked.]
Figure 7-43. Example of a Risk-based Trend Chart for completion date used in a NASA project.
Thresholds. The P/p sets the thresholds during the Schedule Management Planning Process and
documents them in the SMP. For the case discussed above in the figure, the project had selected
everything above the launch window as “green”, meaning no action required as long as the mean value
was above the yellow banded region. The yellow band marked the launch window and any time the
mean value was within that yellow band, the project action was “Watch”. When the mean value was in
the red-banded region, mitigation was required.
Caveat. The SRA results are very dependent on the quality of the inputs. The risk list and the IMS must
be current and quality checked. These requirements can induce latency in the data if the project is not
maintaining currency in the Maintenance sub-function.
[Figure: “Float Plus Margin to Deliver to I&T” horizontal bar chart by subsystem name (SS1–SS5), x-axis 0–100 days; each bar stacks Plan Comp, Float, and Margin, with need dates marked for SS3, for SS2, 4 and 5, and for SS1 & 6, and an “On time” reference.]
Figure 7-44. Using the SRA results to forecast the probability of subsystems (SS) completion for delivery to I&T.
Figure 7-44 is similar to Figure 7-31 with the exception of the probability bands added from the SRA.
Using the 70th percentile, some of the subsystems (SS) change their status. SS1 and SS2 have now
moved from “No Action” to “Watch”. SS6 has moved from “Watch” to “Corrective Action Required”.
Furthermore, SS5 is now showing a probability of exceeding its available margin, and SS3 shows that it is
very unlikely that any margin will be used.
Figure 7-45 shows an alternative method for showing the SRA results versus required margin. In this
technique, the S-curves are plotted for each time the periodic SRA is performed and are compared
against the current required margin. In this example case, Q1 and Q2 show no P/p action required. Q3
and Q4 indicate a “Watch” and “Corrective Action” respectively. This version of the tracking chart is
more common, and it displays more information by showing probability values where the S-curves
violate the margin boundaries. In this example case the “Watch” area is between the 70th and 50th
percentiles with no action required above and corrective action required below.
[Figure: margin sufficiency chart. Y-axis: Confidence Level, %; X-axis: On Dock, KSC Need Date. Quarterly SRA S-curves Q-1* through Q-4 are plotted against the required margin, with regions “Sufficient” (above the 70th percentile), “Watch” (between the 70th and 50th percentiles), and “Corrective Action Req’d.” (below the 50th percentile). *Note: Indicates Quarter-1 Management Report.]
Figure 7-45. This figure shows a margin sufficiency tracking chart using the S-curves directly from the periodic SRA analyses.
Thresholds. The P/p sets the thresholds during the Schedule Management Planning Process and
documents them in the SMP. In this example case, the P/p has chosen the “Watch” threshold to be
where the total float is consumed, and the margin is the only cushion to delivery. The corrective action
threshold is set where the margin is 50% consumed.
[Figure: quarterly ICSRA results (Q-1* through Q-4) plotted as cost versus delivery date against the MA, with a 30% bound indicated.]
Figure 7-46. This figure shows both cost and schedule values from an ICSRA plotted against the MA and ABC.
Thresholds. No thresholds are shown here, nor did the specific P/p prescribe any. An example might be
to define actions in specific quadrants surrounding the ICSRA plotted point and its uncertainties. Figure
7-47 illustrates the example.
[Figure: four-quadrant chart around the plotted ICSRA point: Quad 1 – No Action (containing a small “Watch” region); Quad 2 – Cost Correction; Quad 3 – Cost & Sched Correction; Quad 4 – Sched Correction.]
Figure 7-47. Example of recommended thresholds for risk-based tracking against the MA and ABC.
• Quad 1. If the MA or ABC is in Quad 1, no action is required with the exception of the small
rectangular box bounded by the 70th percentiles and the 50th percentile JCL point. This
rectangular region is labeled “Watch”. Within this region there is between 30 and 50 percent
chance that either the cost or schedule agreement will be violated.
• Quad 2. If the MA or ABC is in Quad 2, there is at least a 50 percent chance that the cost
agreement will be violated and corrective action for cost performance improvement will be
required. In this quadrant, the completion date agreement is at least 50 percent likely to be
met.
• Quad 3. If the MA or ABC is in Quad 3, there is at least a 50 percent chance that both the cost
and schedule agreements will be violated and corrective action for both cost and schedule
performance improvement will be required.
• Quad 4. If the MA or ABC is in Quad 4, there is at least a 50 percent chance that the schedule
agreement will be violated and corrective action for schedule performance improvement will be
required. In this quadrant, the cost agreement is at least 50 percent likely to be met.
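One possible encoding of these quadrant rules is sketched below, expressed in terms of the probabilities that the cost and schedule agreements will be met; the function and its 50%/70% boundaries follow the example above and are illustrative only:

    # Sketch of the quadrant thresholds in the example above (illustrative).
    def jcl_quadrant_action(p_cost_met, p_sched_met):
        if p_cost_met >= 0.5 and p_sched_met >= 0.5:
            if p_cost_met < 0.7 or p_sched_met < 0.7:
                return "Quad 1 (Watch): 30-50% chance an agreement is violated"
            return "Quad 1: No action"
        if p_cost_met < 0.5 and p_sched_met >= 0.5:
            return "Quad 2: Cost correction"
        if p_cost_met < 0.5 and p_sched_met < 0.5:
            return "Quad 3: Cost & schedule correction"
        return "Quad 4: Schedule correction"

    print(jcl_quadrant_action(p_cost_met=0.65, p_sched_met=0.8))  # Watch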
Caveat. In addition to the previously stated caveats for the schedule risk analyses, the same
considerations apply to the cost data as well. Since the current CPI will be used to estimate the cost
uncertainties, the EVM data must be of suitable quality and should be current.
Knowing past schedule performance can provide much insight into future expectations. The findings are
sorted into one of two categories as follows:
1. Corrective Action. It has been determined that a correction needs to be made to the baseline
schedule. Corrective actions beget BCRs. Those corrections are managed through the P/p’s
Change Control process, as described in Section 7.3.1 and documented through the P/p’s
CM/DM function.
2. Retention Rationale. In some cases, the requirements or the best practice cannot be met. In
those cases, if a corrective action is not taken, retention rationale must be documented along
with any potential impacts to the schedule.
Performance metrics and trends that reflect poor activity and milestone completion rates may be an
indicator that the P/p schedule is not realistic and that corrective action is needed to control the
schedule. The P/p documents predetermined thresholds in the SMP for when corrective actions are
needed. This procedure utilizes a set of specific actions to be followed to bring the schedule back into
compliance when the prescribed thresholds are exceeded, as shown in Figure 7-48.
[Figure: flowchart of the performance measurement procedure. Performance measurement at each status date feeds a “Watch Threshold Exceeded?” decision; if the threshold is not exceeded, measurement continues against the cumulative baseline plan. If it is exceeded, the possible actions include: Action 1: Watch (gather additional information); mitigation of specific risks, with residual risk accepted; Action 3: Re-plan (re-estimate activities not started, add/delete activities, adjust start dates, initiate CR); and, if the “Re-baseline?” decision is affirmative, Action 4: Re-baseline (validate the re-plan, change the performance baseline, perform JCL, SRB or Peer Review, update Decision Memo, initiate CR).]
Figure 7-48. Shown here are the possible actions taken when performance measurement flags a threshold exceedance.
Corrective Actions
The schedule baseline (and PMB) should remain stable and only be modified due to authorized changes
in work scope, reflected in authorized replans or authorized rebaselines. There are four specific
corrective actions that can be taken through the P/p’s change control process:
1. Watch. The PMB remains unchanged. Watch is always an action internal to the P/p. The P/p
may release margin to protect against a replan. The Technical Lead may be directed to manage
within available float. In such cases, additional risks may need to be added to the risk register.
2. Retain. Sometimes corrective actions are not feasible for reasons of complexity, cost, or other
extenuating circumstances. In those cases, a rationale for retention is written and no corrective
action is taken and the PMB remains unchanged.
3. Replan. A replan is internal to the P/p and occurs when there is a change in the original plan for
accomplishing the previously authorized scope, typically involving the redistribution of budget
for remaining work. The PM typically has the authority to replan within the approved MA, but
must obtain the approval of the proper Decision Authority if the MA needs to be changed. The
PMB, which is usually tied to the MA and P/p end-item or finish date (e.g., launch), may or may
not be changed as the result of a replan. The EVM performance measurements, if applicable,
are not reset unless the PMB changes.
4. Rebaseline. Rebaseline is generally external to the P/p. A rebaseline requires approval by the
proper Decision Authority, as well as external stakeholders, including OMB and Congress.
Rebaseline is a special case of a replan where the ABC is changed, and as a result, the MA and
PMB are usually also changed. P/ps are required to rebaseline when: (1) the estimated
development cost exceeds the ABC development cost by 30 percent or more (for projects over
$250 million, also that Congress has reauthorized the project); (2) the NASA Associate
Administrator judges that events external to the Agency make a rebaseline appropriate; or (3)
the NASA Associate Administrator judges that the P/p scope defined in the ABC has been
changed or the tightly coupled P/p has been interrupted.156 ABCs for P/ps are not rebaselined
to reflect cost or schedule growth that does not meet one or more of these criteria. When an
ABC is rebaselined, the Decision Authority directs that a review of the new baseline be
conducted by the Standing Review Board or as determined by the Decision Authority. After a
rebaseline, EVM performance measurements are reset to 1.0 and a re-validation process is
required for the changed content.
Figure 7-49 illustrates the flow of corrective actions and their impact on the schedule baseline.
156 NPR 7120.5 rebaseline requirements reflect the requirements described in the “NASA Authorization Act of 2005, Section
103 of Public Law 109-155.”
Figure 7-49. Flowchart illustrating how corrective actions and updates affect the schedule baseline.
As mentioned in the above sections, there exist conditions by which a replan and/or rebaseline may be
directed by entities external to the P/p. The NASA Authorization Act of 2005, Section 103, “Baseline and
Cost Controls” for major P/ps (>$250M) specifies conditions under which a replan and/or rebaseline
must be performed. NPR 7120.5 has implemented the requirements specified in the NASA
Authorization Act of 2005 as follows. For P/ps whose LCC is greater than $250M:
• If the Estimate at Completion (EAC) for the Development costs or the LCC exceeds the ABC by
15%, a replan is required.
• If the EAC for the Development costs or the LCC exceeds the ABC by 30%, a rebaseline is
required.
• If the estimated completion dates for the specified key schedule milestone(s) slips by more than
six months, a replan is required.
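These decision rules can be summarized in a short sketch; this is an illustrative encoding only, and NPR 7120.5 remains the authoritative source:

    # Sketch of the NPR 7120.5 rules listed above for P/ps with LCC > $250M
    # (illustrative encoding; consult NPR 7120.5 for the requirements).
    def external_action(eac_over_abc_pct, key_milestone_slip_months):
        if eac_over_abc_pct > 30:
            return "Rebaseline required"
        if eac_over_abc_pct > 15 or key_milestone_slip_months > 6:
            return "Replan required"
        return "Continue quarterly reporting"

    print(external_action(eac_over_abc_pct=18, key_milestone_slip_months=2))  # Replan required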
Figure 7-50 illustrates the decision and actions required by NPR 7120.5 for externally imposed replan
and/or rebaseline.
Figure 7-50. Flowchart of the NPR 7120.5 quarterly performance review for P/ps with an LCC over $250M: a development cost EAC that exceeds the ABC by more than 15 percent, or a key milestone slip of more than six months, requires Congressional notification and a replan; a cost increase of more than 30 percent requires Congressional notification and a rebaseline; otherwise, quarterly reporting continues. P/ps at or below $250M are not externally reported.
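The threshold logic shown in Figure 7-50 reduces to a short conditional test. The sketch below is a hypothetical illustration only; the function name, inputs, and messages are assumptions for the example, not an official implementation of NPR 7120.5.

```python
def external_action(eac_over_abc_pct: float, milestone_slip_months: float) -> str:
    """Externally imposed action for a P/p with LCC over $250M.

    eac_over_abc_pct: percent by which the EAC for development cost (or LCC)
        exceeds the ABC, e.g., 17.0 for a 17% overrun.
    milestone_slip_months: slip of the specified key schedule milestone(s).
    """
    if eac_over_abc_pct > 30.0:
        return "Congressional notification; rebaseline required"
    if eac_over_abc_pct > 15.0 or milestone_slip_months > 6.0:
        return "Congressional notification; replan required"
    return "Continue quarterly reporting"

print(external_action(17.0, 0.0))  # replan (cost growth over 15%)
print(external_action(4.0, 7.0))   # replan (milestone slip over 6 months)
print(external_action(32.0, 0.0))  # rebaseline (cost growth over 30%)
```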
7.3.4.3 Decision 2. Retain or Replan?
Upon analyzing the performance data or gathering additional information from the watch action, the
PM may choose to retain the schedule, leaving it as is, or implement a replan.
Decision. (1) Retain (do not change) the schedule, or (2) Replan (and possibly change the MA and/or
integrated performance baseline (or PMB), as appropriate). Document the corrective action or
retention rationale in the schedule baseline change log as part of the BoE.
Often no action is taken, and the poorly performing activity is unchanged. If no corrective action is
assigned, the PMB is unchanged. The following list is a set of potential reasons no action is assigned:
1. Other activities may be causing the poor performance and the corrective actions assigned to
them will resolve performance on the threatened activity. For example, an action assigned to
activity X will increase the float available to activity Y.
2. There may exist external activities that the P/p cannot control that are causing the poor
performance. For example, a component delivered from another P/p or from a foreign partner
may be delayed.
3. It may not be cost-effective to replan; continue to track the performance.
4. Funding availability may be limited and priorities for correction assigned to other activities.
5. Timing may be inopportune. For example, a major review is in progress which may uncover
additional performance issues, and waiting until its completion may allow corrective actions to be
consolidated.
Taking no action usually exposes the P/p to additional risks which should be entered into the RMS, since
continuing on may threaten the MA and/or the ABC. Should this be the case, the P/p needs to review
the current performance and risks with its stakeholders to determine whether a replan or rebaseline
needs to be made instead.
A replan may be necessary for many different reasons, which may include, but are not limited to:
• To bring the schedule performance back into compliance with requirements
• To utilize schedule margin for risk mitigation activities or the realization of risks
• To address a situation whereby the time-phasing of the schedule shows overutilization of
available budget
• To incorporate a scope change affecting one or more subsystems
• To address unplanned budget impacts through the PPBE process
Typical approaches for updating the schedule due to a replan are described in Procedure 5.
Decision. (1) Do not rebaseline and return to the performance measurement procedure, or (2)
Rebaseline and change the ABC and document with an update to the DM. Document the corrective
action in the schedule baseline change log as part of the BoE. Note: If/when a P/p undergoes a formal
rebaseline, the P/p may work to a “new” schedule baseline; however, the original schedule baseline
should be preserved for traceability and future planning.
7.3.4.7 Action 4. Rebaseline
Rebaselining is a special case of replanning that results from the need to change the P/p’s external
commitment, or ABC, in addition to the internal commitment, or MA. Rebaselining occurs when the
existing baseline is no longer achievable and measuring performance against it is of little or no practical
value. The need for a rebaseline may occur due to poor performance or other external factors, such as
budget cuts or the launch priorities of other P/ps. It may also occur when a rebaseline is directed per the NASA
Authorization Act of 2005, Section 103. Partial rebaselines to selected portions of the baseline schedule
can be implemented for the same reasons and using the same process as a full P/p rebaseline.
Typical approaches for updating the schedule due to a rebaseline are described in Procedure 5.
157 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012. Expiration Date: August 14, 2020. Page 37. https://nodis3.gsfc.nasa.gov/npg_img/N_PR_7120_005E_/N_PR_7120_005E_.pdf
Typical events can initiate a change to the schedule baseline. For instance:
• The annual PPBE guidance document may reduce the available budget which may cause either
deletion of content via the descope plan, or postponement of some activities until later fiscal
years. It is also possible that the PPBE guidance may increase budget and allow acceleration of
activities.
• The risk management process may identify mitigation plans that will need to be added.
• Planning errors may be discovered that require changes that impact cost and schedule and
therefore need to be processed through the P/p’s CM/DM process.
• The normal design process will often identify additional activities that need to be added.
• Component or engineering development unit (EDU) testing may uncover non-compliances requiring
redesign activities to be added.
• Rolling waves will need to be decomposed into lower-level activities.
• The performance measurement process may trigger corrective actions that will modify or add
activities.
2. The risk list is revised to reflect the changes.
3. Depending on the corrective action, a decision may need to be made to reset the schedule
performance baseline, or integrated PMB. Sometimes the progress against the original
schedule baseline can be so poor that the performance measurements are no longer useful.
Resetting the cost and schedule variances should only be done to improve the validity and
usefulness of the performance measurements, such as EVM performance data, to support
performance evaluations, future planning, or other management needs. For instance,
rebaselining the integrated performance baseline (or PMB) will reset EVMS Performance
Measures back to 1.0.
4. If the MA and ABC do not need revision, then the flow goes back to Procedure 3.
Schedule baseline changes resulting from the approval of the corrective action are implemented
through updates to the Schedule Database. If these changes are not documented in the BoE, reflected
accurately in the revised baseline, and subsequently controlled, then P/p future Schedule Performance
Measures may not be accurate and identification of the need for any additional corrective actions after
the replan will be compromised. The impacts could potentially result in either a lack of variance
reporting or additional, unnecessary variance reporting requirements, and more importantly, faulty
schedule information for use in management decision making. By properly maintaining the
configuration of the schedule baseline, P/ps will have a plan against which to measure performance and
understand variances that correspond to the work that is intended to be accomplished.
Whether retaining or replanning the schedule, several alternative methods exist for controlling the
schedule baseline, as described in the following sections.
Depending on the type of corrective action taken, a variety of approaches may be utilized as part of the
Complete Schedule Baseline Method. Several of the workaround approaches detailed include schedule
acceleration and optimization techniques, as well as margin management techniques.
durations, interdependencies, and budget) as necessary within the approved P/p boundaries established
for scope, schedule, and budget. Careful coordination within the P/p between the responsible technical,
resource, and schedule personnel is necessary to preserve the integrity of the IMS. In-progress tasks are
typically not replanned unless there is significant work left to accomplish. In this scenario, it is
recommended that:
a. The in-progress task be redefined (split) such that the work accomplished to date becomes the
entire scope for that task with the remaining work scope becoming eligible for replan.
b. The original in-progress task is then recorded as complete. This essentially sets BCWS and
BCWP equal to ACWP for the completed portion, eliminating the variances accrued to date.
c. The remaining work scope, with the corresponding budget and duration, is then transferred to a
new task or tasks and replanned.
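A minimal sketch of this split, using a hypothetical task data model rather than any actual scheduling or EVM tool interface:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    budget: float  # budget at completion (BAC)
    bcws: float    # planned value to date
    bcwp: float    # earned value to date
    acwp: float    # actual cost to date

def split_in_progress(task: Task) -> tuple[Task, Task]:
    """Split an in-progress task for replanning (simplified sketch).

    The work done so far becomes the full scope of a closed-out task with
    BCWS = BCWP = ACWP, so no further variances accrue against it; the
    remaining scope and budget move to a new task that is then replanned.
    """
    closed = Task(name=f"{task.name} (completed)", budget=task.acwp,
                  bcws=task.acwp, bcwp=task.acwp, acwp=task.acwp)
    remaining = Task(name=f"{task.name} (replanned)",
                     budget=task.budget - task.bcwp,
                     bcws=0.0, bcwp=0.0, acwp=0.0)
    return closed, remaining
```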
Change Activity Durations
Changing activity durations may be necessary when it becomes obvious that there is a consistent
indication of underestimation. In most cases, schedule durations are lengthened based on forecasted
performance measurements.
Increase Assigned Resources
Increasing assigned resources may be necessary when it becomes obvious that there is a consistent
indication of underestimation.
• Schedule Crashing. Schedule crashing is a schedule acceleration technique that adds extra
resources to perform the work in a shorter period of time. Note: This technique requires
additional money to add resources, which may be obtained through UFE.
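A standard way to decide what to crash first is the CPM cost-slope formula, (crash cost - normal cost) / (normal duration - crash duration); the critical-path activity with the lowest slope buys back schedule most cheaply. A sketch with hypothetical activity data:

```python
activities = [
    # (name, normal_days, crash_days, normal_cost, crash_cost)
    ("Integrate", 20, 16, 100_000, 140_000),
    ("Test",      15, 12,  80_000, 134_000),
]

def cost_slope(normal_days, crash_days, normal_cost, crash_cost):
    """Extra cost per day of schedule saved by crashing the activity."""
    return (crash_cost - normal_cost) / (normal_days - crash_days)

for name, nd, cd, nc, cc in sorted(activities,
                                   key=lambda a: cost_slope(*a[1:])):
    print(f"{name}: ${cost_slope(nd, cd, nc, cc):,.0f} per day saved")
# Integrate: $10,000 per day saved  <- crash this one first
# Test: $18,000 per day saved
```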
Adjust the Calendar
Adjusting the calendar may be appropriate when longer work days or additional shifts are a possibility.
The PM can choose to spend more money per time unit to shorten the schedule. Note: This approach is
only possible if resources (people, facilities, and money) exist without conflicts to support the increased
work periods.
Add Activities
Adding activities may be necessary for the following reasons:
• Risk mitigation activities have been identified and approved
• A test or verification failure requires adding redesign and retest activities
• Engineering development unit swapped for flight unit to accommodate late delivery of flight
unit
• Change of scope or poor planning requires additional activities
The P/S will need to work with the Technical Leads to determine the baseline start and finish dates for
the new activities.
Delete Activities
Deleting activities may be necessary when the projected schedule delay is unacceptable and other
techniques for accelerating the schedule are not viable options. Descope options are identified in the
SMP and should be the last approach exercised in order to maintain completion dates and recover from
poor performance. Note: Descopes often require contractual changes.
Alter Workflows/Activity Sequencing
Altering workflows through modifications in activity sequencing is often done to reduce the impact of
poor performance.
• Fast Tracking. Fast tracking is a schedule acceleration technique where activities that would
have been performed sequentially are performed in parallel.
• Streamlining. Streamlining is a schedule optimization technique that depends on the P/p team’s
ability to find a more efficient and effective approach to completing the work. It is often
dependent on a faster or simpler approach, requiring innovation and possibly including reuse or
eliminating non-value-added work. Note: With this method, the P/p has to weigh the level of
potential risk involved with each streamlining option.158
The P/S will need to work with the Technical Leads to determine the revised workflow.
Caveat. A disadvantage of the Baseline Control Milestone Method is that certain performance metrics,
such as the Baseline Execution Index (BEI) and Hit or Miss Index (HMI), will track performance against
the original schedule baseline dates at the detailed task level – since these tasks’ baseline dates will not
be changed. This will not be the case with the Complete Schedule Baseline Method. A workaround to
this limitation of the Baseline Control Milestone Method is to use it in conjunction with the Annual PPBE
Schedule Baseline Reset Method described later. This approach provides a method for keeping the
schedule baseline at the detailed level up to date for schedule performance monitoring and reporting
using BEI, HMI, critical milestones, and cumulative milestones reports.
158 PASEG, Version 4.0. National Defense Industrial Association (NDIA), Integrated Program Management Division (IPMD). March 9, 2016. Page 115.
7.3.5.3 P/p Element Baseline Method
The P/p Element Baseline Method essentially tracks the individual revision levels of the P/p’s various
element IMSs (i.e., spacecraft, instrument/payload, ground system, etc.). Typically, the P/p will maintain
a set of major receivable and deliverable milestones between these various P/p elements. The element
work may consist of effort performed in-house, at industry contractors, other NASA centers, or
international partners. Typically, the P/p receives the “native” IMS files from these performing
organizations, but schedule baseline control is maintained by the provider organizations. The P/p’s key
responsibility from a schedule baseline control standpoint is to assure the provider organizations are
maintaining an adequate schedule baseline control process. Additionally, the P/p should document and
track the various revisions of the individual provider schedule baselines using a schedule baseline
change log.
Schedule margin is used to absorb the impact due to risks and uncertainties. The P/p can effectively
manage schedule margin by continually performing SRAs to understand and address potential risk
impacts to the P/p throughout its life cycle.
Once an SRA is run, the P/p should be able to determine whether the potential impacts of the
uncertainties and risks can be absorbed given the allotted margin. If the finish milestone estimate does
not move past the planned P/p completion date, then the P/p likely has appropriate margin given its
identified uncertainty/risk posture. However, rather than waiting for risk impacts to occur,
proactive management might include adding risk mitigation activities to the schedule through the
approved use of margin. The P/p may be able to implement a process for this to be accomplished via
routine risk management meetings with PM approval.
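As a simplified illustration of this margin check, the sketch below runs a minimal Monte Carlo SRA over a hypothetical three-activity serial flow; the durations, risk likelihood, and the 70 percent confidence target are assumptions for the example, not handbook requirements.

```python
import random

PLANNED_FINISH = 120  # working days to the planned completion date
                      # (105-day deterministic flow plus 15 days of margin)

def simulate_finish() -> float:
    # random.triangular(low, high, mode) duration uncertainty per activity
    dur = (random.triangular(28, 40, 30)     # design
           + random.triangular(45, 70, 50)   # build
           + random.triangular(20, 35, 25))  # test
    if random.random() < 0.30:               # discrete risk, 30% likelihood
        dur += random.triangular(5, 20, 10)  # rework impact, in days
    return dur

results = sorted(simulate_finish() for _ in range(10_000))
p70 = results[int(0.70 * len(results))]
print(f"P70 finish: {p70:.1f} days; margin is "
      f"{'adequate' if p70 <= PLANNED_FINISH else 'insufficient'} at P70")
```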
Technical Leads are usually the first to recognize potential areas for risk mitigation. The P/S should work
with the Technical Leads to compare what is actually occurring on the P/p with the results of the SRA to
ensure the right risks are prioritized for mitigation. Assuming P/p management will plan to mitigate the
risks with the greatest impact first, the risk sensitivity chart can be used to help prioritize risk
management efforts. The “top risk” can be removed and the simulation rerun to understand the
effective impact generated from the risk. Risks should be backed out of the analysis one-at-a-time with
a new sensitivity chart generated for each iteration to reveal the subsequent top risk. It is important to
note that the order of top risks is dependent on the combination of risks applied to the simulation at any
given time (i.e., some risks have greater overall impact on a schedule when combined with a particular
set of other risks). Removing a risk may change the critical path, making other risks and activities more
“critical,” which is why it is necessary to rerun the tornado chart after each simulation. If the P/p has
identified some other order of risk mitigation due to funding, management priorities, or other
constraints, risks can be worked-off the simulation according to the PM’s plan. Running different “what
if” scenarios provides a good comparison of the impacts each mitigation approach has to “buy back”
schedule through working off the risks in different orders to achieve the greatest and most realistic
benefit.159
159 The authors of this handbook acknowledge that while risk tornado charts based on correlation coefficients are widely used and
supported by probabilistic risk analysis tools, recent research has shown that “ranking risks by correlation coefficients is not a
good sensitivity measure, especially for schedule.” (Kuo, Fred. “A Mathematical Approach for Cost and Schedule Risk
Attribution.” NASA 2014 Cost Symposium.) Another alternative for determining “optimal” schedule margin allocation is the
Ruhm-Mango-Kreps Algorithm. (Fussell, Louis. “Margin Allocation Using the Ruhm-Mango-Kreps Algorithm.” 2016 NASA Cost
Symposium.)
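The one-at-a-time back-out described above can be sketched as follows. The risks, likelihoods, and impacts are hypothetical, and, consistent with the caution in the footnote, the ranking here uses the simulated schedule bought back by removing each risk rather than a correlation coefficient.

```python
import random

BASE = 105  # deterministic duration of the critical flow, in days
risks = {"late vendor delivery": (0.4, 12),  # (likelihood, impact in days)
         "test failure rework":  (0.3, 10),
         "staffing shortfall":   (0.2, 6)}

def p70(active: dict, n: int = 5_000) -> float:
    """70th-percentile finish with the given set of active risks."""
    runs = sorted(BASE + sum(impact for prob, impact in active.values()
                             if random.random() < prob)
                  for _ in range(n))
    return runs[int(0.70 * n)]

remaining = dict(risks)
while remaining:
    base = p70(remaining)
    def saved(name):  # schedule bought back by removing this one risk
        return base - p70({k: v for k, v in remaining.items() if k != name})
    top = max(remaining, key=saved)
    print(f"Mitigate next: {top} (~{saved(top):.0f} days bought back at P70)")
    del remaining[top]  # back the top risk out and re-rank the rest
```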
When mitigation activities are added to the schedule, margin activity durations, or some portion of
margin activity durations are converted to the risk mitigation activity or set of activities, and then linked
to reflect an appropriate logic flow within the schedule. The mitigation activities become part of the
P/p’s plan, and the margin is accordingly reduced in duration to maintain the P/p’s planned finish date.
It is important to note that the duration and cost of a mitigation activity may not be the same as the
duration and cost of the risk impact; in most cases, it should require less money to mitigate the risk
ahead of time than to recover from the risk impact(s), which is why P/ps should aim to mitigate risks,
when possible. Sometimes when a risk is mitigated, the impact is not completely reduced, or some level
of threat remains. This net risk after controls or mitigations have been put in place is referred to as
residual risk. If any residual risk exists that cannot be mitigated, it should be accounted for in the P/p’s
risk list and considered in future iterations of the SRA for the proper allocation of margin. There may be
enough information about the residual risk (usually an “accepted” or “transferred” risk) to capture
margin where the risk impact might occur, or the margin to cover the risk impact may need to be
included in the lump sum of margin held near the end of the schedule.
Technical Leads, in addition to the Schedule Analyst, will usually be the first to recognize the need to
use margin for risk impact purposes. Although SRA risk prioritization may show that a particular risk
may significantly and adversely affect the P/p’s critical path, sometimes risk mitigations are either too
costly or do not provide the P/p significant time savings. The P/p may anticipate a possible need for
technical rework but choose not to make changes until verifying the need for rework through testing. In
this case, the margin held after testing can be translated into new work to manage the impact of the
failed test. In other cases, the risks may result from external factors that are out of the P/p’s control,
such as an anticipated late delivery from a partner. These risks might be those that have already been
“accepted” by the P/p. Thus, the P/p may allow these risks to occur and use margin to handle the
impacts. In these instances, margin must be converted into the associated work activities so that status
can be tracked; “margin” itself, however, is not a work activity and should never be statused.
Whether margin will be used for mitigation purposes or to manage risk impacts when risks are realized,
the P/p should follow pre-determined processes, outlined in the SMP, for approving the use of margin.
The P/p will need to determine the trigger point at which margin should be released to manage the risk.
It will be important for the P/p team to understand whether certain margin is held/managed by the
Program Office, the Project Management Office or PM, or the Technical Leads, and how it is “released”
from one level to the next if/when necessary. Waiting too long to use margin may result in an inefficient
use of the available time (i.e., the margin may not be as helpful as it could have been if released earlier).
The P/p management team will need to incorporate any scheduling changes, such as the addition of
mitigation activities or the transformation of margin into new work activities to address risk impacts.
Only as risk mitigation activities are identified and approved, should they be incorporated into the P/p’s
plan as specific tasks in the schedule and budget made available to perform the new work. For tracking
purposes, the margin task(s) and effective margin calculations should be reduced accordingly.
Again, margin itself cannot be statused, so it benefits the P/p to find a way to repurpose the margin and possibly start subsequent
tasks earlier.
Technical Leads need to be aware of what others on the P/p are doing. There may be opportunities that
the P/p can take advantage of by repurposing or redistributing the resources expected to be consumed
during the margin period. The P/p should actively analyze the schedule to determine whether any
subsequent activities can start earlier than planned (i.e., prior identification of early start dates may help
facilitate this analysis). If tasks can start as soon as their predecessors are finished, this can save the P/p
money by completing work sooner and perhaps gaining total slack later in the schedule to help with
potential performance issues or unknown risks. The P/p may also consider re-shifting and using the
“available resources” to offset delays in other areas of the schedule. This assumes that
reserves/workforce would have been available during the original margin task duration. If there are no
other areas of the schedule that need to effectively take advantage of the “available resources” to get
back on track, the P/p may consider performing value engineering or additional testing for certain
technical elements to enhance technical performance (e.g., reliability, supportability, maintainability,
survivability, etc.), or simply returning the margin duration to the P/p as float (i.e., an earlier end date) if
the risk associated with the margin task no longer exists.
If none of these are viable options, the margin may be able to be moved downstream in the P/p
workflow. While the calculation should be the same for incorporating a margin task at any point along
the critical path, it may not be feasible to move the entire duration margin task to the end of the
schedule. Since margin is not always entirely, or even partially, fungible, how the margin translates from
one point in the schedule to a later point in the schedule may be dependent on how much total float is
on the primary critical path versus the secondary critical path; this will dictate whether the original
primary critical path remains as the primary critical path after the removal and subsequent reallocation
of margin, or whether the original secondary critical path becomes the new primary critical path. The
full margin duration that was available to a particular subsystem or instrument in the spacecraft
schedule may not translate one-day-for-one-day as it moves toward the end of the schedule. For
instance, if the margin activity had a larger duration than the difference in total float between the
primary and secondary critical path, the secondary critical path may now become the primary critical
path. Figure 7-51 shows how removing a margin activity from the original primary critical path of a
“time now” schedule with the intent of reallocating margin to what was the secondary critical path may
result in a smaller duration of effective margin on what is now the “new” critical path.
Original schedule with Margin 1 task of 10 days: critical path A (15 days) -> Margin 1 (10 days) -> C (10 days) -> D (3 days) -> E (2 days) = 40 days, 0 days float to the need date; path B (20 days) -> C -> D -> E = 35 days, 5 days float.
Final schedule with the Margin 1 task removed and a downstream Margin 2 task of 5 days created: critical path B -> C -> D -> E -> Margin 2 = 40 days, 0 days float; path A -> C -> D -> E -> Margin 2 = 35 days.
Figure 7-51. An example of unused schedule margin being moved downstream in the schedule.
With the margin removed, it is obvious that the path with Activity B was the deterministic critical path,
pre-margin allocation. This suggests that Activity A might have been a new technology that was
“inherently riskier” (i.e., had greater uncertainty) than Activity B, indicating that it appeared on the
probabilistic critical path and necessitated some allocation of margin. However, the uncertainty did not
manifest itself and therefore the margin was not used. The path with Activity A was no longer critical
and management could redistribute the margin throughout the schedule. With careful analysis, it
became apparent that the schedule would only be able to recoup 5 days of margin “activity” due to the
next most-critical path having only 5 days of float to the P/p need date. In other words, the P/p only
ever had 5 days of effective margin because while there were 10 days of margin on the probabilistic
critical path, the pre-margin-allocation deterministic critical path had only 5 days of float to the P/p’s
need date. This simple example illustrates how critical path analysis on the deterministic and
probabilistic critical paths aids in determining how much margin can be adequately accommodated at
various points within the schedule, as well as how much effective margin the schedule is actually
carrying with respect to the P/p end/need date.
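A quick numeric check of the Figure 7-51 example shows how the effective margin is bounded by the float of the next most critical path (values taken from the figure):

```python
# Path durations without the margin task, from Figure 7-51 (days).
path_a = 15 + 10 + 3 + 2   # A -> C -> D -> E = 30
path_b = 20 + 10 + 3 + 2   # B -> C -> D -> E = 35
need_date = 40
margin_on_a = 10           # margin originally held on the A path

float_b = need_date - path_b            # 5 days on the next-most-critical path
effective = min(margin_on_a, float_b)   # margin moved downstream is capped
print(f"Effective margin: {effective} days")  # 5 days, not the original 10
```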
If the P/p encounters a situation where margin near the end of the schedule will not be needed to
address risks or uncertainties and reallocation of the margin at this point in single-flow activity is not
viable, then P/p management may need to explore more creative options. While not likely an option for
most Space Flight P/ps, some P/ps may be able to move up the finish milestone (i.e., deliver earlier,
launch earlier, etc.). Another option may be for P/p teams to focus on completing any outstanding
documentation (i.e., waivers, approvals, signatures). Still other P/ps may decide to allow personnel
some down time prior to launch activities, for example. Depending on how much margin is remaining
and the particular constraints on the P/p, the PM may be able to explore opportunities for utilizing
margin for purposes other than to address risks, such as adding back in previously descoped work, for
example. Again, this assumes that there were “available resources” associated with the margin duration
(even though margin has no specified scope or budget). P/p management would likely need to go
through an approval process to use management reserve (dollars) to add in new work.
If a P/p encounters low confidence level results prior to establishing and allocating margin (i.e., prior to
baselining the schedule), the indication might be that there is not enough float to accommodate the
needed margin to account for the potential uncertainty and risk impacts. The P/p should consider viable
options for modifying the preliminary schedule to regain appropriate total float, and ultimately establish
adequate margin. Should any “time now” (post-baseline) SRA results indicate a low confidence level or
the need for margin in excess of what is planned to meet a higher confidence level, the P/p may need to
consider similar schedule workarounds to “buy back” schedule margin.
Possibilities for replanning primary critical path elements (and near-critical path elements, as needed)
may include looking at restructuring workflows to include parallel paths or alternate logic, combining
tests, adding resources, expediting procurements by paying a premium, etc. The goal should be to
develop an achievable, risk-informed schedule that provides the needed margin. Schedule
workarounds, as described in Section 7.3.5.1, may take into account viable schedule compression
techniques for activities along the critical path, such as performing tasks in parallel (fast tracking),
adding resources (crashing), or utilizing other replanning methods to capture recovery or new technical
approaches. The P/p may be able to replan particular elements of the schedule to regain float (and
ultimately margin), while still working to the planned end item finish date or P/p completion date, or
rebaseline the entire schedule with a new P/p finish date to ensure a more realistic and achievable plan.
However, the PM will need to weigh the potential addition of new risks to the P/p through these
techniques. For example, performing work in parallel may add new risks to the P/p, whereas adding
resources could be expensive. Taking these risks and uncertainties into account in an SRA on the
hypothetical, “replanned” schedule and evaluating the resulting confidence level will help the PM to
have a good grasp on whether the benefits outweigh the risks.
8 Schedule Documentation and Communication
Schedule Documentation and Communication ensure P/p schedule information is captured in such a
way that it is useful for the management of a P/p and aids in decision making at all levels of the P/p,
including stakeholder decisions. These two Schedule Management sub-functions, which are not only
reliant upon the Schedule Database and the CM/DM processes, but also the baselining activity discussed
in Chapter 7, facilitate subsequent management and control of content change. This rigor is necessary
to ensure the intended meaning of P/p schedule information is not only steadfastly conveyed but also
clearly received. Documentation and Communication are both an overarching linkage as well as a
common thread woven into each sub-function of Schedule Management as reflected in Figure 8-1.
Schedule Documentation and Communication are pillars of the overall Schedule Management function.
Documentation and Communication begin at the onset of Schedule Management and continue
throughout its life cycle. Well-defined tools, protocols, and formats for Schedule Management
documented in the Schedule Documentation and Communication Plan must be established early in the
P/p life cycle and developed in close coordination with the requirements of the SMP and Schedule
Database to ensure processes, metrics, and data input/output are captured and conveyed in a manner
that:
• Documents schedule information, including the IMS and BoE in a transparent and traceable
manner
• Supports the evaluation of the baseline, changes, decisions, and the resulting impacts on the P/p
and stakeholders
• Identifies, captures, and communicates impacts to multiple organizational units (i.e., cross-
cutting decisions) enabling the coordination of Schedule Management efforts
• Enables consistent and repeatable reporting by each organizational unit to the sponsoring
organization at the next higher level of the NASA hierarchy in a manner that allows the higher
level organization to integrate that information into its own assessment to make informed
decisions
• Ensures Schedule Management decisions and lessons learned, including their rationale, are
captured as part of the organizational and institutional knowledge
8.1 Best Practices
Figure 8-2 details the best practices for Schedule Documentation and Communication.
SM.DC.1 Schedule Documentation and Communication Follows SMP • The schedule is documented and communicated according to the Schedule Management Plan.
SM.DC.2 File Management System is Used for CM/DM of Schedule Information • A systematic, structured, electronic file management system is used for configuration management/data management of schedule information.
SM.DC.3 Strategies, Plans, and Processes are Communicated to Stakeholders • Schedule strategies, plans, and processes are routinely communicated with stakeholders.
SM.DC.4 Interface Tools Support the Schedule Information Delivery/Receipt • Interface tools support the delivery/receipt of appropriate schedule information and data.
SM.DC.6 Plans/Products are Documented and Retrievable for LCRs • All required plans and products are at the appropriate maturity and documented and retrievable in preparation for LCRs.
SM.DC.7 Formal Findings from LCRs and KDPs are Documented, with Progress Tracked and Products Updated • Formal findings, recommendations, and actions from key P/p and life cycle reviews (LCRs) and Key Decision Points (KDPs) are documented, with progress against the recommendations and actions tracked, and products updated, as needed.
SM.DC.8 Reporting Formats and Templates are Developed • Schedule reporting formats and templates are developed, which best align with and meet the needs of the P/p according to identified reporting forums.
SM.DC.9 Schedule is Routinely Backed Up • The P/p schedule is routinely backed up throughout the P/p life cycle, starting even before the schedule is baselined.
SM.DC.10 Schedule Performance, Assessment, and Analysis Data is Routinely Backed Up • Schedule performance data, as well as assessment and analysis inputs, models, and results, are routinely backed up for easy recovery, repeat analysis, and development of trends.
SM.DC.11 IMS Versions are Archived • The original baseline IMS, any significant replan IMSs, all rebaseline IMSs, and as-built IMSs, along with schedule versions at each major life cycle review milestone, are the minimum for P/p schedule archives.
SM.DC.12 Schedule Narrative is Archived • The narrative associated with the original baseline IMS, any significant replan IMSs, all rebaseline IMSs, and as-built IMSs, along with schedule versions at each major life cycle review milestone, is included in P/p schedule archives.
SM.DC.13 Final Schedule (and Cost) Package is Documented and Archived • A schedule (and cost) data package associated with the original baseline IMS, any significant replan IMSs, all rebaseline IMSs, and as-built IMSs, along with schedule versions at each major life cycle review milestone, is documented and archived.
SM.DC.14 Schedule Analysis is Archived • Analysis input data, analysis models, and analysis results are formally archived throughout the P/p life cycle for easy recovery, repeat analysis, and development of trends.
SM.DC.15 Schedule Lessons Learned are Routinely Developed and Archived • P/p schedule management lessons learned are routinely developed and archived throughout the P/p life cycle, at a minimum at the end of each life cycle review milestone.
8.2 Prerequisites
Schedule Documentation and Communication can be initiated when:
• P/p stakeholders are identified and understood
• P/p has identified P/p scope, including objectives, needs, and constraints
• The P/p WBS has been developed
• A P/p CM/DM process exists
• An initial draft of the SMP has been developed, including Schedule Communication and
Documentation requirements
• For reporting, the Schedule Database is completely developed and populated with all initial (not
necessarily ‘baseline’) data
The Schedule Communication sub-function details how the P/p disseminates schedule information
among team members, as well as other stakeholders. Schedule Communication can be both written and
oral. Schedule Communication allows for the information captured by the Schedule Documentation
sub-function to be transferred in a useful format or variety of formats that aid in management and
decision making. Communication strategies and reporting templates should be established very early in
the P/p life cycle and structured to support various management briefings and decisions. Periodic
schedule reporting is oftentimes a large component of Schedule Communication and is generated from
the processes and procedures in Chapter 7.
Careful consideration should be taken to ensure P/p Schedule Documentation and Communication
requirements are consistent with overall Schedule Management requirements levied by NPR 7120.5,
NPR 7120.7, or NPR 7120.8, as appropriate, as well as the Schedule Management best practices defined
in this handbook.
8.3.1 Configuration Management and Data Management (CM/DM) for Schedule Management
It is a best practice for a systematic, structured, electronic file management system to be used for
configuration management/data management of schedule information. CM/DM is the PP&C function
that is responsible for providing control of documentation, data, and technical characteristics of both
configuration and non-configuration products for a P/p. As applied to programmatic work products,
CM/DM is responsible for providing visibility into and controlling changes to performance, functionality,
and physical characteristics and requirements. It is the backbone of Documentation and
Communication and facilitates the capture, archive, and dissemination of information throughout the
P/p life cycle, while maintaining the integrity and protection of data.
CM encompasses common naming conventions, data structure, consistent data coding, and
change control to enhance the collection and analysis of data statistics. DM includes the control of data
and information that is not usually identified within the configuration baseline and provides the control
and release of data generated throughout the P/p’s life cycle. Many proven CM/DM tactics involve
creative file naming conventions that capture identifying attributes or recent changes made, not just
version stamps. This allows for quick navigation through sets of similar files and enables straightforward
“time travel” back to previous points from which the P/S or Schedule Analyst can retroactively depart.
Regardless of the specific naming convention, file heredity should be apparent. An example used in
many ICSRA analyses is shown in Figure 8-3.
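As a hypothetical illustration of such a convention (the fields, separator, and file extension are assumptions for the example, not a NASA standard), a small helper might stamp each file with a version, a date, and descriptive keywords:

```python
from datetime import date

def schedule_filename(project: str, version: int, *keywords: str) -> str:
    """Version stamp plus descriptive keywords, so heredity stays apparent."""
    stamp = date.today().strftime("%Y%m%d")
    return f"{project}_IMS_v{version:03d}_{stamp}_{'_'.join(keywords)}.mpp"

print(schedule_filename("ABC", 12, "riskloaded", "whatif-latedelivery"))
# e.g., ABC_IMS_v012_20250101_riskloaded_whatif-latedelivery.mpp
```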
Schedule Management Planning. CM/DM helps to officially document the principles, processes,
and best practices for how the schedule will be managed throughout the P/p life cycle, including the
design, development, and implementation of all schedule management processes, tools, reporting
forms, and formats, which are included in the Schedule Management Plan and Schedule Control
Plan.
Schedule Development. CM/DM ensures the schedule is identified and documented in sufficient
detail to support the P/p life cycle. Properly maintained records that detail the P/p BoE, initial logic,
and major decision points of the P/p, as well as the thought process that went into creating the
baselined schedule flow, are valuable not only to the P/S and P/p itself but also to the planning of
future P/ps. Because the BoE is the backbone of the schedule data and information contained in the
Schedule Database, its minimum standard calls for a collection of information necessary for an
independent party to understand and reproduce the schedule estimate, according to the NASA Cost
Estimating Handbook.162 This can be accomplished by constructing an information packet (such as a
file folder) that contains three critical ingredients: a primary narrative file, constituent data and
information, and external references, all packaged together in a single location.
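A minimal sketch of assembling such a packet as a folder structure (the layout and names are hypothetical):

```python
from pathlib import Path

def create_boe_packet(root: str) -> Path:
    """Create the three-part BoE packet: narrative, data, and references."""
    packet = Path(root)
    for sub in ("narrative", "constituent_data", "external_references"):
        (packet / sub).mkdir(parents=True, exist_ok=True)
    (packet / "narrative" / "BoE_narrative.txt").touch()  # primary narrative
    return packet

create_boe_packet("ABC_BoE_packet")
```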
Schedule Assessment and Analysis. CM/DM applied over the life cycle of the schedule provides
visibility and control of its performance, functional, and physical attributes. Perhaps the most
important aspect of any schedule estimate’s evolution is comprehensive documentation during the
Assessment and Analysis sub-functions. SRAs and ICSRAs, which involve loading risk and/or cost into
the IMS or Analysis Schedule, are inherently iterative and can defy even the best Schedule Analyst’s
attempts at tracking the analysis maturation. Over the course of a P/p, as the Schedule Analyst
modifies the preliminary or baseline models and creates what-if cases, an earnest configuration
control effort is necessary for avoiding confusion, minimizing and tracing errors, and capturing key
points of departure for P/p management and interested P/p staff (including the analyst him/herself
upon revisiting the material in the near or long term). Ensuring the consistent capture of the models
and associated data also facilitates the P/p’s ability to develop and understand trending over time.
Further, documentation of the SRA/ICSRA should always contain the actual model files themselves
along with the schedule, risk, and cost baselines.
As an example, to fully explain a cost loading method associated with an ICSRA estimate, a Schedule
Analyst may elect to maintain, alongside the models and baselines, an Excel workbook that
illuminates the keywords included in each model version’s file name within a change narrative,
outlines a schematic of the cost-to-schedule mapping strategy, and if possible, contains the
calculations themselves used to dissect the source cost estimate with the schedule in mind. The
162 NASA Cost Estimating Handbook, Version 4.0. February 27, 2015. Appendix J. Page J-30. https://www.nasa.gov/sites/default/files/files/CEH_Appj.pdf
documentation can then be used as a contextual map that keeps an estimate cohesive and coherent
throughout its initiation, maturation, and eventual finalization.
Schedule Maintenance and Control. Whereas change control addresses the management of the
P/p, CM/DM addresses the management of the products. CM/DM is the practice of handling
changes systematically so the schedule maintains its integrity over time, whether through routine
status updates, internal or external replanning, or more formal rebaselining efforts. CM/DM
implements the policies, procedures, techniques, and tools that evaluate and manage proposed
changes, track the status of changes, and maintain an inventory of schedule support documents as
the schedule changes. The CM/DM process facilitates orderly management of the schedule,
including schedule changes for such beneficial purposes as to revise workflow logic, improve
performance, or reduce risk. CM/DM allows P/p management to track requirements throughout
the life cycle through acceptance and operations and maintenance. As changes inevitably occur in
the requirements and design, they must be approved and documented, creating an accurate record
of the system status and resulting changes to schedule tasks, durations, and logical linkages.
Another important factor in CM/DM is records retention. P/ps are required to comply with NPR 1441.1
and NASA Record Retention Schedule (NRRS) 1441, which describe NASA’s records process and
retention schedules, respectively.163 Storage of the data needs to be compliant with the established
procedures, and the appropriate level of security safeguards needs to be maintained. While the
Configuration Manager is ultimately responsible for ensuring authoritative data is collected through a
single authoritative source, the P/S should work with the Configuration Manager and the PM to help
identify both the data required to be collected, as well as the authoritative source for each Schedule
Management data product. The Configuration Manager should inform the PM and the P/S when
authorized parties are provided access to Schedule Management data according to pre-determined
plans and agreements.
8.3.2 Reporting
Clear communications build credibility with P/p stakeholders.164 Schedule reporting is the dissemination
of meaningful information about the schedule’s overall status, progress to date, and forecast to
complete. Schedule reporting helps determine if the P/p’s objectives are being met. Various levels of
schedule reporting may be required depending on specific stakeholder needs. It is important to note
that all levels of schedule reporting should be provided from a single, integrated Schedule Database and
not from separate schedule sources. The assumption that all WBS elements must report to the same
depth or level of detail is not valid. Even at the P/p level, schedule reporting will vary
based on the level of PM interest in the elements contained in the P/p schedule. Higher volume (dollars
or hours) or critical/risk activities may require more granularity in detailed reporting, while lower
volume, non-critical/risk, or level-of-effort tasks may require only summary-level reporting. In today’s
163 NASA Records Management is applicable to the Schedule Management Function per NPR 1441.1, NASA Records
Management Program Requirements and NRRS 1441.1, NASA Records Retention Schedules.
https://www.nasa.gov/content/nasa-records-management
164 PMI. Practice Standard for Scheduling. Second Edition. Page 39.
P/ps, where resources are pushed to the limit, having appropriate flexibility in reporting requirements is
a valid approach. P/p schedule reporting and communication requirements are captured in the SMP.
Figure 8-4 is an example illustration of the Schedule Documentation and Communication Plan
components that should be established during Schedule Management Planning. The P/p must decide
which products flow in those channels and make plans to produce them. The following sections
recommend various types of communication strategies and schedule reporting formats that will prove
useful in managing a P/p.
165 PMI. PMBOK, Fifth Edition. ANSI/PMI 99-001-2013. 2013. Page 287.
Figure 8-4. The Schedule Documentation and Communication Plan must consider the different needs of all interested parties.
Contractor Reporting
To effectively integrate the contractor’s schedule data into the P/p IMS, it is imperative that a clear
understanding exists between the government and contractors about the information necessary to be
included as part of the baseline schedule, as described in Section 7.3.1.1. This includes details such as
schedule content, level of detail, formats, reporting frequency, tools, thresholds, responsibilities, and
controls. The P/S should coordinate with the responsible COTR to develop the schedule management
and reporting requirements for applicable procurements. These requirements may be contained in the
SOW, CDRL, and DRD, which must provide clear requirements for contractors in the areas of scope content,
deliverable expectations, and data requirements in order to avoid confusion during P/p implementation.
The SOW, CDRL, and DRD should be structured in order to take maximum advantage of contractors’
existing scheduling systems, capabilities, and formats, while still supporting NASA Schedule
Management best practices. The contract SOW should clearly delineate the work and deliverables that
are to be scheduled, the type of schedule products to be provided, the DRD to be followed, and any
special considerations required for carrying out the contracted work. The CDRL is a listing of the
technical information and reports required for a contract including submittal and approval criteria and
instruction. The IMS DRD is a document that provides specific requirements for schedule content, level
of detail, format, reporting frequency, applicable thresholds, and guidance for variance rationale. An
example DRD for schedule deliverables can be found on the SCoPe website.167
External Partner Reporting
An MOU, LOA, or any other binding agreement with an external partner must provide clear specifics
and/or guidance for the external partner in the areas of scope content, deliverable expectations, and
data requirements so that minimal confusion arises during P/p implementation. Additionally, these
documents must be structured in a manner that takes full advantage of their existing scheduling
systems, capabilities, and formats, while still aligning with NASA Schedule Management best practices.
The external partner agreement should contain guidance for schedule management that clearly
delineates what is to be scheduled, the type of schedules to be provided, the data requirements, and
any special considerations required for carrying out the agreed to scope of work. The agreement should
also document the specific P/p deliverables along with specific information on quantities, WBS
relationship, due dates, delivery location, means of delivery, and any other pertinent guidance needed
by the external partner. External partner agreements should also consider implications and impacts
Reporting should be performed consistent with the required products for a given P/p phase and the
associated expected maturity of those products. Section 2.3 includes a table of required Schedule
Management products throughout the P/p life cycle. For additional context on the expected maturity of
the IMS in a given phase, see Section 5.5.6, Figure 5-15. Throughout the P/p life cycle, assumptions and
status are routinely documented in the BoE for transparency and traceability of programmatic products
to P/p requirements, including any approved changes.
168 NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook. Page 28, 35, 117.
For Research and Technology P/ps, PMs conduct internal P/p reviews as essential elements of
conducting, managing, evaluating, and approving P/ps. These reviews help to establish and manage the
progress against plans. These internal reviews are called Program Status Reviews (PSRs) for programs
and Periodic Project Reviews (PPRs) for projects. More detail on PSRs and PPRs can be found in NPR
7120.8.169
Per the Standard Operating Procedure Instruction (SOPI) 6.0 on the SRB Programmatic Process, “data
drops” are defined to ensure the P/p programmatic products are available to the SRB’s Programmatic
169 NPR 7120.8A. NASA Research and Technology Program and Project Management Requirements. Effective Date: September
14, 2018. Expiration Date: September 14, 2023.
170 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012.
Team in sufficient time to perform the programmatic portion of the IA prior to the LCR.172 The Terms of
Reference (ToR) document defines the scheduling of and content required for the data drops and is
negotiated between the P/p, Program Office, and SRB prior to the initial data access milestone. To
ensure adequate time for the SRB Programmatic Team to become familiar with the P/p’s programmatic
products, three programmatic data drops are defined:
• Data Access: P/p provides access to required repositories for the LCR and overview
documentation (e.g., P/p Plan, WBS Dictionary, latest monthly status briefing) to assist the SRB
Programmatic Team in understanding the P/p prior to the beginning of the LCR
• Data Drop 1: P/p provides preliminary required programmatic LCR products
• Data Drop 2: P/p provides final required programmatic LCR products
Reporting the results of any analysis necessary to support an LCR/KDP will always be specific to the P/p’s
requirements for the analysis. Furthermore, reporting will vary depending on the audience. For
example, the P/p will want more information and details than an external reviewer or the stakeholders.
However, it is a recommended practice for the Schedule Analyst to document the analysis results using
the following outline. From that report, adjustments and extracts can be made for the different levels of
reporting.
• Requirements for the analysis and date
• Overview of the analysis plan
• Brief or references to the technical baseline
• Specification of the technical, cost, schedule, and risk products used in the analysis
• Overview of the models
• Test and verification of the models, results of any peer reviews
• Baseline results
• Sensitivity studies
• Conclusions
• Recommendations
172 OCFO-SID-0002. NASA Standard Operating Procedure Instruction (SOPI) 6.0. Release Date: May 23, 2017.
https://www.nasa.gov/sites/default/files/atoms/files/sopi_6.0_final.pdf
responsible for the tracking, disposition, and closure of the RFAs. The IA team is also responsible for
providing findings (strengths and weaknesses) and recommendations. The P/S will be involved in the
response to and closure of any RFAs, findings, or recommendations that impact Schedule Management
processes or the schedule baseline. An LCR is complete when the governing Decision Authority makes
his or her decision to authorize a P/p to continue down the life cycle.
In some instances, LCRs are followed by KDPs. KDPs conclude the LCR at the end of the life cycle phase
and serve as gates through which P/ps must pass to proceed to the next life cycle phase. The KDP
occurs once the P/p and IA team report out to the governing Program Management Council (PMC),
including the Decision Authority. The Decision Authority is the Agency individual who is responsible for
making the KDP determination on whether or how a P/p proceeds through the life cycle and for
authorizing the key P/p cost, schedule, and content parameters that govern the remaining life cycle
activities.173 The Decision Authority completes its assessment of the information presented during the
applicable PMC (e.g., presentations by the P/p, Program Office, Mission Directorate, and IA Team, etc.),
determines whether or how the P/p proceeds into the next phase and approves any caveats or
additional actions (including responsible parties and due dates). These decisions are summarized and
recorded in the Decision Memorandum (i.e., “Decision Memo”) signed at the conclusion of the
governing PMC by all parties with supporting responsibilities.174
The purpose of the Decision Memo is to ensure that major decisions and their basis are clearly
documented and become part of the retrievable records. Once signed, the Decision Memo is appended
to the P/p FAD or P/p Plan, as appropriate. The programmatic content of the Decision Memo depends
on whether the P/p is in Formulation or Implementation as follows:
• Decision Memo during Formulation. Documents key parameters related to work to be
accomplished during each phase of Formulation. It also documents a target LCC range (and
schedule range, if applicable) that the Decision Authority determines is reasonable to
accomplish the P/p. For projects at KDP B, a more refined LCC range is developed.
• Decision Memo during Implementation. Documents the parameters for the entire P/p life
cycle. At this point, the approved P/p LCC is no longer documented as a range but instead as a
single number. The LCC includes all costs, including all Unallocated Future Expenses (UFE) and
funded schedule margins, for development through prime mission operation to disposal,
excluding extended operations. The ABC, which forms the official P/p baseline, is established as
part of the KDP C approval and documented in the Decision Memo.
The Decision Memo also describes the constraints and parameters within which the Agency and the PM
will operate (i.e., costs, schedules, or MA – if applicable, and key deliverables), the extent to which
changes in plans may be made without additional approval, and any additional actions from the KDP.
The MA forms the foundation for P/p execution and performance measurements. The MA is typically
173 NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook. Pages 9-11.
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150000400.pdf
174 NPR 7120.5E. NASA Space Flight Program and Project Management Requirements. Effective Date: August 14, 2012. Expiration Date: August 14, 2020.
The potential outcomes at a KDP, as documented in the Decision Memo, include approval or disapproval
to enter the next P/p phase, with or without actions for follow-up activities as follows:
• Approval to enter the next program phase, with or without actions
• Approval to enter the next phase, pending resolution of actions
• Disapproval for continuation to the next phase. In such cases, follow-up actions may include:
o A request for more information and/or a follow-up review that addresses significant
deficiencies identified as part of the life cycle review preceding the KDP
o A request for a Termination Review
o Direction to continue in the current phase
o Redirection of the P/p
Working with P/p management, the P/S will use the decisions documented in the DM to facilitate any
necessary revisions to the schedule baseline to support the ongoing maintenance and control of the
schedule, as described in Section 7.3.
8.3.2.4 Report Types and Formats
It is a best practice for schedule reporting formats and templates to be developed, which best align
with and meet the needs of the P/p according to identified reporting forums. Schedule Documentation
should be easy to read and understand. Schedule Communication uses reporting formats and templates
pre-determined from the SMP in the dissemination of meaningful information about the schedule’s
overall status, progress to date, and forecast to complete. Schedule reporting helps determine if the
P/p’s objectives are being met by communicating information in a structured way to ensure clear
understanding by all stakeholders. It should also be noted the P/p schedule itself is a communication
tool as well as a catalyst for communication. The following sections discuss the three types of schedule
performance reporting, as identified in Figure 8-5: Status Reporting, Progress Reporting, and
Forecasting. Each type of performance reporting includes different report formats and templates that
will prove useful in managing a P/p schedule.
Figure 8-5. Schedule Performance Reporting can be broken down into three types: Status Reporting, Progress Reporting, and
Forecasting.
8.3.2.4.1 Status Reporting: Where the Schedule Now Stands (Actual Data)
Schedule status reporting describes where the schedule now stands (i.e., actual data). This category of
communication and reporting refers to a high-level snapshot in time and may include lists or graphical
views of actual activity/milestone dates or activity/milestone counts, for example. It does not include
any analysis of schedule metrics, trends, comparisons to previous versions, or forecast projections.
Examples of status reports are described below.
Timeline Report
A Timeline Report is a high-level communication tool, primarily used within Programs, because it can
show a portfolio of project schedules according to a specified timeline. Timeline Reports are often
accompanied by project-specific narrative to provide additional insight regarding the status of each
project. Figure 8-6 and Figure 8-7 show examples of Program Timeline Reports.
Figure 8-6. A Timeline Report showing the status of a Program’s missions, including projects in development and operations.
Figure 8-7. A Timeline Report showing the status of a Program’s “current” missions.
Milestone Status Report
A Milestone Status Report focuses on significant events scheduled to occur at specific times in the P/p.
Such events could be the initiation or completion of a particularly important or critical activity,
equipment deliveries, reviews, or approval dates. A Milestone Status Report typically provides status on
key milestones as identified in the schedule baseline. However, at any given point in the P/p life cycle,
additional milestones from the schedule may be added to the report for better management insight.
Charts used for milestone status reporting often take the form of a table of completed and/or to-go
milestones, as illustrated in Figure 8-8; a list, as shown in Figure 8-9; or a Gantt chart-style capture of
milestones, as shown in Figure 8-11, which presents a Program Milestone Status Report with key
milestones for the projects that make up the Program.
Figure 8-8. A Milestone Status Report, composed of a table showing completed, current, and near-term milestones.
Figure 8-9. A Milestone Status Report composed of a list of completed and remaining milestones.
Summary Schedule
A Summary Schedule is composed of a Gantt chart, or horizontal bar chart, in which the horizontal axis
represents the total time span of the P/p broken down into increments of time (e.g., days, weeks,
months, quarters, or years), and horizontal bars of varying lengths represent the time span for each
summary activity. The bars are placed along the axis to show the timing of events (i.e., start dates, end
dates, and durations), and additional formatting of the bars may indicate summary activity progress. Key
milestones are typically represented by diamonds (or triangles). A vertical line placed on the Gantt chart
represents the report date. In its typical form, the Gantt chart does not show task dependencies, so it is
impossible to tell how a slip in one task may affect another. Thus, it is not considered a forecasting
report; instead, it shows a representation of the P/p's current status and current plan forward.
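To make that composition concrete, the following is a minimal sketch, assuming Python with matplotlib
(this handbook does not prescribe a charting tool); the activities, milestones, and dates are hypothetical
placeholders.

# Minimal sketch of a Summary Schedule (Gantt) chart; all names and dates
# below are hypothetical placeholders, not data from an actual P/p.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import date

summary_activities = [  # (name, start, finish)
    ("Design",      date(2024, 1, 1),  date(2024, 6, 30)),
    ("Fabrication", date(2024, 5, 1),  date(2025, 2, 28)),
    ("Integration", date(2025, 1, 15), date(2025, 9, 30)),
]
milestones = [("PDR", date(2024, 4, 15)), ("CDR", date(2024, 11, 1))]
report_date = date(2024, 8, 1)

fig, ax = plt.subplots(figsize=(8, 3))
for i, (name, start, finish) in enumerate(summary_activities):
    # Horizontal bar: position along the time axis shows start, finish, and duration.
    ax.barh(i, (finish - start).days, left=mdates.date2num(start), height=0.4)
ax.set_yticks(range(len(summary_activities)))
ax.set_yticklabels([a[0] for a in summary_activities])
for name, d in milestones:
    # Key milestones drawn as diamonds on their own row.
    ax.plot(mdates.date2num(d), -0.7, marker="D", color="black")
    ax.annotate(name, (mdates.date2num(d), -0.6))
# Vertical line marking the report ("time now") date.
ax.axvline(mdates.date2num(report_date), color="red", linestyle="--")
ax.xaxis_date()
ax.invert_yaxis()
plt.tight_layout()
plt.show()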
During Pre-Phase A, when not much detail is known about the P/p scope, a Summary Schedule may only
include preliminary phase durations and estimated key milestone dates, as shown in Figure 8-12.
Figure 8-12. A Pre-Phase A Summary Schedule.
Once the P/p progresses past Pre-Phase A, the Summary Schedule should, at a minimum, identify all
WBS elements and reflect all contract and controlled milestones, major development phases (i.e.,
design, fabrication, integration, assembly, etc.), clearly identifiable schedule margin, critical path(s) (at
least primary, but recommended secondary and tertiary), and all end item deliveries. Figure 8-13 shows
an example of a typical project Summary Schedule.
Figure 8-13. A typical Summary Schedule report with a legend, an acronym list, and callouts to highlight significant schedule
slips.
As another example, Figure 8-14 shows a Summary Schedule with a separate “critical path margin”
summary row, which allows for quicker identification of the margin along the critical path. A legend
summarizing the total margin on each of the primary, secondary, and tertiary critical paths would also
be a helpful addition to this report.
Figure 8-14. A Summary Schedule report with an added “critical path margin” summary row for easier traceability.
When the Summary Schedule reflects a summary IMS for the complete P/p, it is often referred to as a
“Master Schedule”. Whether representative of the complete IMS or not, the amount of detail contained
in a Summary Schedule often depends on the level of insight the stakeholders need. While more
detailed information, as shown in Figure 8-15 and Figure 8-16, may be appropriate for the PM, a simpler
report, as shown above in Figure 8-13, may be more appropriate for senior management reviews.
Figure 8-15. A Summary Schedule with a legend to help identify key milestone symbols and color coding.
Figure 8-16. A Summary Schedule used as a management tool to track key information for all project elements. While a dense
source of information that may be helpful to the PM, it may be overwhelming for other stakeholders.
Summary Schedules may also be produced with a focus on a sub-project, subsystem, or instrument
element of the schedule when the P/S needs to communicate with Technical Leads, or when the
Technical Leads need to communicate with the PM, for example. A sub-project Summary Schedule is
shown in Figure 8-17.
Figure 8-17. A sub-project Summary Schedule, which represents progress against sub-project key elements and milestones.
A Summary Schedule does not provide enough schedule detail for thorough schedule analysis. Thus, it is
not intended for use as an analysis tool and is not an IMS; however, as it is a representation of the IMS,
it should be traceable to the IMS.
IMS Report
The IMS is the backbone for all Schedule Management reports, in that all schedule-related reports
should be traceable to the IMS. The IMS is also used to facilitate assessments and analyses that aid
both routine Schedule Management and senior-level management and PM decision making through
various reporting forums, such as Center Management Councils (CMCs), Quarterly Status Reviews
(QSRs), Monthly Status Reviews (MSRs), or other internal P/p reviews.
Project elements, such as subsystems, instruments, or sub-projects, are typically required to deliver their
schedules on a monthly basis to support the routine maintenance of the P/p IMS. There are also
instances throughout the P/p life cycle, such as in preparation for LCRs, where the complete P/p IMS is a
necessary “report” or required data product for the associated stakeholders, such as an SRB. A
description of the IMS and its necessary contents for both Programs and projects is covered in Section
5.6.1. When the IMS serves as a reporting product, it should be provided in its native file format.
Health Check Report
A Health Check Report is a useful output of schedule health check tools. Health Check Reports facilitate
the Schedule Maintenance sub-function because they enable effective communication between the P/S
and the Technical Lead regarding activities and milestones that may require status updates. The
examples in Figure 8-18 and Figure 8-19 illustrate health check reports generated from different tools,
describing schedule status such as the number of activities with missing logic, hard constraints, negative
float, number of lags/leads, etc. Most of the other health check metrics described in Section 6.2.2.1.2
can be represented through these types of Health Check reports. The health check tools generally
provide a listing of the activities for each “count”, so they can be further examined and updated or
corrected, if necessary.
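The counts behind such reports can be approximated directly from exported schedule data. The
following is a minimal sketch in Python, assuming a hypothetical exported task list with illustrative field
names; it is not the STAT or Acumen Fuse implementation.

# Minimal sketch of schedule health check counts over a hypothetical task
# list; field names and the hard-constraint set are illustrative assumptions.
tasks = [
    {"id": 1, "predecessors": [],  "constraint": "MSO",  "total_float": -3, "lag": 5},
    {"id": 2, "predecessors": [1], "constraint": None,   "total_float": 12, "lag": 0},
    {"id": 3, "predecessors": [],  "constraint": "SNET", "total_float": 0,  "lag": -2},
]
HARD_CONSTRAINTS = {"MSO", "MFO", "FNLT", "SNLT"}  # assumed "hard" types

health = {
    "missing_logic":    sum(1 for t in tasks if not t["predecessors"]),
    "hard_constraints": sum(1 for t in tasks if t["constraint"] in HARD_CONSTRAINTS),
    "negative_float":   sum(1 for t in tasks if t["total_float"] < 0),
    "lags_leads":       sum(1 for t in tasks if t["lag"] != 0),
}
# Listing the offending activities behind each "count" so they can be
# examined and updated or corrected, as the text describes:
negative_float_ids = [t["id"] for t in tasks if t["total_float"] < 0]
print(health, negative_float_ids)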
Figure 8-18. A Health Check Report generated from the NASA Schedule Test and Assessment Tool (STAT).
Figure 8-19. A Health Check assessment generated from the Deltek Acumen Fuse tool. A variety of Health Check Reports can be
generated using the dashboards and report formats available for illustrating both built-in and custom health check metrics.
Critical Milestones Report
Unique ID | Critical Milestones | Need Date | Total Float (days)
5290 | (Rec) Instr, L/V, S/C Component Mechanical ICDs | 3/15/2006 | 0
5716 | Solar Cell Vendor Contract Award | 5/31/2006 | 0
4103 | Propulsion Module Structure Award | 6/30/2006 | 0
5682 | (Del) Flt IM Structure to I&T | 9/27/2007 | 0
3309 | (Rec) Instrument Module Structure for I&T | 9/27/2007 | 0
5704 | Group 3 FLT Modules from Power Group | 10/17/2007 | 0
5527 | (Del) Solar Array to S/C I&T | 1/31/2008 | 0
4097 | Mandatory Launch Date | 10/31/2008 | 0
5509 | (Rec) Modules from Vendor (Power) | 9/21/2007 | 1
5655 | (Rec) Solar Array System | 1/31/2008 | 1
4121 | Avionics Module Structure Award | 6/30/2006 | 2
4751 | Instrument Module Ready for Orbiter I&T | 11/5/2007 | 2
Figure 8-20. A Critical Milestones Report showing “critical” milestones according to total float calculations from the IMS.
364
well as the total float value column. Milestones with low levels of float, corresponding to the critical
path(s) can be color-coded for further visualization.
Total Float Report
Milestone | Planned Finish | Total Float (Working Days, Nov '14)
S/C - Propulsion Delivery to I&T | 3/27/2015 | 0
S/C - PDDU Delivery to I&T | 3/26/2015 | 1
S/C - SARA (SRC/TAG SAM) Del to I&T | 8/5/2015 | 0
S/C - C&DH #1 Delivery to I&T | 3/17/2015 | 8
S/C - GN&C Lidar Delivery to I&T | 7/16/2015 | 7
S/C - SDST Delivery to I&T | 3/31/2015 | 4
OVIRS Instr - Delivery to S/C I&T | 8/12/2015 | 39
OCAMS Instr - Delivery to S/C I&T | 8/17/2015 | 4
Ground System - Launch Readiness | 6/30/2016 | 45
Figure 8-21. A Total Float Report illustrating the total float calculated against planned milestone finish dates.
Figure 8-22 shows an example of a Total Float Report in a graph format, which incorporates both float
and margin for key milestones and activities.
Figure 8-22. A Total Float Report illustrating both the total float and margin on key activities and milestones.
Critical Path Report
Figure 8-24. A Critical Path Report, showing the primary critical path filtered from the IMS according to calculated total float
values.
Critical Path Reports may include any number of critical and near-critical paths; however, it is a
recommended practice that the report reflects all paths with ten (10) workdays or less of total float, at a
minimum. Figure 8-25 shows an example of a Critical Path Report illustrating both the primary and
secondary critical paths.
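The following is a minimal sketch, in Python, of the filtering step behind a Critical Path Report under
the ten-workday recommended practice above; the task records are hypothetical stand-ins for IMS data.

# Minimal sketch: filter IMS activities for a Critical Path Report, keeping
# all paths with ten workdays or less of total float, per the recommended
# practice above. Task records below are hypothetical.
tasks = [
    {"name": "Fab primary structure", "total_float": 0},
    {"name": "Avionics box build",    "total_float": 7},
    {"name": "Ground system test",    "total_float": 45},
]
NEAR_CRITICAL_THRESHOLD = 10  # workdays of total float

report = [t for t in tasks if t["total_float"] <= NEAR_CRITICAL_THRESHOLD]
for t in sorted(report, key=lambda t: t["total_float"]):
    # Lowest-float (most critical) activities listed first.
    print(f'{t["total_float"]:>3} d  {t["name"]}')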
Figure 8-25. A Critical Path Report, highlighting the primary critical and secondary critical paths using the Summary Schedule as
its foundation. This report also highlights the margin on each path.
Figure 8-26. A Resource Allocation Report showing over-allocated resources and the changes made to effectively “level” the
resources.
8.3.2.4.2 Progress Reporting: What has been Accomplished (Plan vs. Actual Data)
Even before the schedule has been baselined, the P/p can track performance against planned dates.
Progress reporting refers to any report generated from schedule data involving a comparison (e.g., a
variance) or a trend. Once the schedule has been baselined, the P/p can measure performance against
baseline dates to generate performance metrics (e.g., EVM; see Section 7.3.3).
Although these reports often project future activity/milestone dates, the comparisons, calculations,
and/or metrics are deterministic in nature, not accounting for the uncertainty and risk inherent in
forecasting; thus, these reports are captured in this Progress Reporting section instead of the following
Forecasting section. Examples of progress reports are described below.
Milestone Progress Report
A Milestone Progress Report is generally a list of milestones showing the baseline finish dates versus the
current planned finish dates, as shown in Figure 8-27. These reports help the P/p team to stay aware of
near-term milestones.
SPACECRAFT #1 Critical Milestones
Designator # | Milestone | Subsystem | Baseline Finish | Current Finish | Need Date | Need - Current
104 | Gnd System SRR | Gnd system | 10/12/2010 | 10/12/2010 | COMPLETE |
106 | Battery DCR | Power | 11/3/2010 | 11/3/2010 | COMPLETE |
107 | Solar Array Contract Award | Power | 11/4/2010 | 11/30/2010 | COMPLETE |
101 | FSW subsystem build 1.5 test complete | Flight S/W | 11/15/2010 | 11/15/2010 | COMPLETE |
109 | Separation System DCR | Mechanical | 11/18/2010 | 11/18/2010 | COMPLETE |
108 | FSW CDR | Flight S/W | 11/19/2010 | 11/19/2010 | COMPLETE |
110 | Harness subsystem ETU#2 complete | Harness | 11/30/2010 | 11/30/2010 | COMPLETE |
103 | RF Comm subsystem ETU complete | RF Comm | 11/30/2010 | 11/30/2010 | COMPLETE |
105 | EPS subsystem ETU#2 complete | Power | 12/15/2010 | 12/15/2010 | COMPLETE |
114 | EVD subsystem ETU#1 complete | Engine Valve Drive | 12/20/2010 | 4/7/2011 | COMPLETE |
115 | Navigator subsystem ETU#3 complete | Navigator | 1/10/2011 | 5/12/2011 | COMPLETE |
110 | Harness subsystem Prop Harness complete | Harness | 1/15/2011 | 2/21/2011 | COMPLETE |
113 | C&DH subsystem ETU#2 complete | C&DH | 1/20/2011 | 4/6/2011 | COMPLETE |
116 | EVD subsystem ETU#2 complete | Engine Valve Drive | 1/20/2011 | 6/6/2011 | COMPLETE |
112 | Risk Reduction Deck to SwRI | Mechanical | 1/31/2011 | 2/25/2011 | COMPLETE |
138 | Receive Fill & Drain Set#1 | Propulsion | 2/3/2011 | 3/4/2011 | COMPLETE |
124 | Receive Filter Set#1 | Propulsion | 2/14/2011 | 3/22/2011 | COMPLETE |
111 | Accelerometer CDR | ACS | 2/25/2011 | 2/25/2011 | COMPLETE |
127 | Receive Axial Thruster Set#1 | Propulsion | 4/15/2011 | 6/1/2011 | COMPLETE |
119 | Ground System PDR | Gnd system | 4/19/2011 | 6/7/2011 | COMPLETE |
118 | Mechanical subsystem thrust tube QM (now TT #1) | Mechanical | 5/4/2011 | 7/29/2011 | COMPLETE |
125 | Receive Pressure Transducer Set#1 | Propulsion | 5/11/2011 | 6/24/2011 | COMPLETE |
122 | ACS (accelerometer) ETU complete | ACS | 5/19/2011 | 7/6/2011 | COMPLETE |
126 | Receive Latch Valve Set#1 | Propulsion | 5/24/2011 | 7/15/2011 | COMPLETE |
128 | Receive Radial Thruster Set#1 | Propulsion | 5/31/2011 | 6/2/2011 | COMPLETE |
123 | Receive Flight USO Units 1&2 | Navigator | 6/3/2011 | 6/16/2011 | COMPLETE |
121 | Receive IS & SC Qual Deck (now Deck #1) | Mechanical | 3/7/2011 | 8/3/2011 | COMPLETE |
146 | FSW Build 2 (2.1) tested to I&T | Flight S/W | 8/22/2011 | 8/30/2011 | COMPLETE |
132 | C&DH subsystem ETU#1 complete | C&DH | 3/7/2011 | 8/30/2011 | COMPLETE |
133 | C&DH subsystem FM1 complete | C&DH | 7/29/2011 | 10/6/2011 | 10/6/2011 | 0 d
129 | Thrust Tube#1 to Prop#1 | Mechanical | 6/1/2011 | 9/19/2011 | 10/10/2011 | 15 d
130 | EVD subsystem FM1 complete | Engine Valve Drive | 6/27/2011 | 9/20/2011 | 10/12/2011 | 16 d
131 | PSE FM1 complete | Power | 6/13/2011 | 10/18/2011 | 10/18/2011 | 0 d
121 | Receive IS & SC Deck#1 (now Deck #2) | Mechanical | 4/19/2011 | 9/30/2011 | 10/28/2011 | 20 d
140 | Instrument Deck#1 to SwRI | Mechanical | 9/13/2011 | 11/9/2011 | 11/10/2011 | 1 d
136 | Cleanroom Complete | I&T | 8/29/2011 | 10/17/2011 | 11/22/2011 (1) | 26 d
137 | Harness subsystem FM1 complete | Harness | 5/20/2011 | 10/11/2011 | 11/22/2011 (1) | 31 d
139 | Mechanical subsystem FM1 deck complete | Mechanical | 8/1/2011 | 11/9/2011 | 11/22/2011 (1) | 9 d
141 | ACS (star sensor) FM1 complete | ACS | 9/16/2011 | 10/17/2011 | 12/1/2011 | 33 d
144 | Receive Tank Set#1 | Propulsion | 7/6/2011 | 11/9/2011 | 12/22/2011 | 31 d
142 | Avionics PSEES Box #1 to S/C#1 | Power | 10/4/2011 | 12/26/2011 | 12/26/2011 | 0 d
145 | Avionics C&DH Box#1 to S/C#1 | C&DH | 10/25/2011 | 1/4/2012 | 1/4/2012 | 0 d
135 | Thrust Tube#2 to Prop#2 | Mechanical | 7/28/2011 | 11/9/2011 | 1/11/2012 (2) | 45 d
149 | Optical Bench #1 to S/C#1 | ACS | 10/17/2011 | 10/26/2011 | 1/27/2012 | 65 d
147 | Transponder #1 to S/C#1 | RF Comm | 7/26/2011 | 1/5/2012 | 2/7/2012 | 23 d
134 | Mag Boom subsystem QM complete | Magnetometer Boom | 7/15/2011 | 12/15/2011 | 2/21/2012 | 48 d
143 | Propulsion subsystem Qual Tank | Propulsion | 8/16/2011 | 1/26/2012 | 3/12/2012 | 32 d
152 | Propulsion subsystem FM1 complete | Propulsion | 12/19/2011 | 2/23/2012 | 3/15/2012 | 15 d
153 | ACS (accelerometer) FM1 complete | ACS | 11/22/2011 | 2/3/2012 | 3/16/2012 | 30 d
150 | Navigator subsystem FM1 complete | Navigator | 12/9/2011 | 3/21/2012 | 3/26/2012 | 3 d
154 | Ground System CDR | Gnd system | 11/1/2011 | 2/7/2012 | 4/7/2012 | 42 d
151 | Mini Stack Acoustic | Mechanical | 12/5/2011 | 4/20/2012 | 6/1/2012 (3) | 30 d
120 | RF Comm Protoflight complete (Omni Antenna) | RF Comm | 4/15/2011 | 2/16/2012 | 7/9/2012 | 102 d
155 | Spacecraft #1 I&T complete * | I&T | 4/20/2012 | 6/1/2012 | 8/6/2012 | 46 d
148 | RF Comm subsystem FM1 complete (Omni Ant.) | RF Comm | 10/5/2011 | 6/22/2012 | 12/3/2012 | 116 d
Notes: Baseline Finish = Early Finish, baselined 9/30/10. Need = Late Finish + Subsystem Reserve (protects
I&T Reserve), with the exception of Milestone 155 (* Need = Late Finish + SC Reserve). Legend:
COMPLETED; NEED - CURRENT < 20 days; Overdue NEED.
(1) Need date corrected to include slack.
(2) Reallocated some S/C #2 Reserve to Subsystems.
(3) Reallocated some S/C #3 Reserve to Subsystems.
Figure 8-27. A Milestone Progress Report illustrating milestone baseline vs. current planned finish dates.
Figure 8-28. A Milestone Variance Report illustrating milestone baseline vs. current planned finish dates, as well as the variance
(days) between the dates.
Figure 8-29. A Milestone Progress Report illustrating milestone baseline vs. current planned finish dates, as well as the
estimated “need date”.
The second type of Milestone Variance Report, shown in Figure 8-30, combines the milestone list with a
graphical display of the milestone variance. The graphical representation is organized similarly to a Gantt
chart, with one milestone per line according to the milestone list. Each milestone is located horizontally
along a time scale showing when it occurred or occurs, along with any associated or expected slips (i.e.,
variances). Milestones differ from the bars in a Gantt chart in that they show only a single date and are
usually depicted as a triangle instead of a bar. Milestones can be shown in various colors or opacities
depicting their status, and slips are typically shown using arrows.
Figure 8-30. A Gantt-style Milestone Variance Report illustrating milestone baseline vs. current planned finish dates.
A Milestone Variance Log, shown in Figure 8-31, is an associated report that provides additional
explanation of milestone status or slips. It is a recommended practice for Milestone Variance Log
information to be captured within the “Notes” field of the scheduling tool for any milestones that
experience changes from their planned or baselined dates.
Figure 8-31. A Milestone Variance Log documents the rationale for milestone finish date changes.
Variance explanations may also be captured directly on each Milestone Variance Report to provide
stakeholders with more complete information, as shown in Figure 8-32.
Figure 8-32. A Gantt-style Milestone Variance Report complete with variance explanations.
Milestone Trend Report
A Milestone Trend Report shows the trends of milestones completed on a month-to-month basis. The
Milestone Trend Report in Figure 8-33 shows both the count of the baseline milestones per month and
the actual milestones completed per month. It also shows the cumulative count of milestones over time
for both the baseline and actuals. This type of report can be used even before milestones have been
baselined, as variance can be shown against preliminary planned dates for P/ps early in their life cycles.
Figure 8-33. A Milestone Trend Report showing the planned number of milestones completed versus the actual number of
milestones completed over the course of several months, with projected milestone counts for upcoming months.
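The counting behind such a report is straightforward. The following is a minimal sketch in Python, using
hypothetical milestone records; a real report would draw baseline and actual finish dates from the IMS.

# Minimal sketch of the monthly and cumulative counts behind a Milestone
# Trend Report; the milestone records below are hypothetical.
from collections import Counter
from itertools import accumulate

milestones = [
    {"baseline_month": "2024-01", "actual_month": "2024-01"},
    {"baseline_month": "2024-01", "actual_month": "2024-02"},
    {"baseline_month": "2024-02", "actual_month": None},  # not yet complete
]
months = ["2024-01", "2024-02", "2024-03"]

baseline = Counter(m["baseline_month"] for m in milestones)
actual = Counter(m["actual_month"] for m in milestones if m["actual_month"])

# Cumulative counts over time for both the baseline and the actuals.
baseline_cum = list(accumulate(baseline.get(m, 0) for m in months))
actual_cum = list(accumulate(actual.get(m, 0) for m in months))
for m, b, a in zip(months, baseline_cum, actual_cum):
    print(m, "baseline cum:", b, "actual cum:", a)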
The Milestone Trend Report in Figure 8-34 illustrates how forecasted dates for key milestones have
shifted over time based on schedule performance. This figure can be generated using actual schedule
performance, as well as using risk-based techniques (e.g., SRAs).
Figure 8-34. Example of a Milestone Trend Report for key milestones showing a change in expected finish dates over time.
Figure 8-35. A Summary Schedule Variance Report adds columns to a basic Summary Schedule to show start date and finish
date variances in summary-level activities.
Float/Slack Trend Report
Float/Slack Trend Reports show the history of slack for a particular area or group of P/p activities, usually
with float/slack totals organized in a table format with an associated graph showing the float/slack
changes over time. This report may be used to show trends by WBS element, by milestone, by driving
paths for certain technical milestones, or by any other logical grouping. The report should show the
total slack for each reporting period. Reasons for total slack changes between reporting periods should
be noted. Figure 8-36 and Figure 8-37 illustrate typical Float Erosion Tables and Figure 8-38 illustrates a
typical Float Trend Report. Figure 8-39 shows another Float Trend Report with summary activities for
additional context as to when float degradation occurred during the P/p life cycle. See Figure 7-34 and
Figure 7-35 for additional examples and guidance on using these reports for Schedule Control.
Figure 8-36. A Float Erosion Table, showing the decrease in total float on key milestones over time.
Activity | Comment | Plan Date | Need Date | Forecast Date | Plan Float | Current Float | Stoplight
C&DH | None | 6/1/20 | 7/1/20 | 6/15/20 | 30 | 16 |
Avionics | Attitude controllers failed qual tests, design change pending | 5/15/20 | 7/1/20 | 6/28/20 | 47 | 3 |
Structures | | | | | | |
- Primary Structure | Late delivery of sidewall panels | 6/1/20 | 6/15/20 | 6/17/20 | 14 | -2 |
- Secondary Structure | None | 6/20/20 | 7/1/20 | 6/25/20 | 11 | 6 |
Mechanisms | Early delivery of hatches | 7/1/20 | 7/5/20 | 6/20/20 | 4 | 15 |
Pyros | GIDEP notice of bad lots, batch testing underway | 7/1/20 | 7/10/20 | 7/15/20 | 9 | -5 |
Thermal Control | None | 7/10/20 | 7/20/20 | 7/10/20 | 10 | 10 |
... | ... | ... | ... | ... | ... | ... | ...
Figure 8-37. A Float Erosion Table, showing the decrease in total float on key activities over time, with planned dates, need
dates, and forecast dates, as well as a stoplight indicator.
Figure 8-38. A Float Trend Report with callouts identifying the reasons for float degradation over time for activities leading to a
specific milestone (e.g., Spacecraft Delivery).
Figure 8-39. A Float Trend Report highlighting planned schedule float versus actual schedule float over time in relation to key
activities and milestones.
Margin Trend Report
Margin Trend Reports capture the burndown of P/p schedule margin over time against the planned
burndown, using the same margin metrics described in Section 7.3.3 as used for Schedule Control.
Figure 8-40 shows a Margin Trend Report and the associated Margin Change Log.
Figure 8-40. A Margin Trend Report and associated Margin Change Log captures the actual burndown of P/p schedule margin
over time in relation to the planned burndown.
Figure 8-41 and Figure 8-42 show typical Margin Trend Reports with minor differences in added
contextual information.
Figure 8-41. A Margin Trend Report, with an associated table showing planned versus actual margin days over time.
Figure 8-42. A Margin Trend Report with contextual callouts and major milestones, comparing the actual margin burndown to:
(1) the planned burndown for the P/p Management Agreement LRD, and (2) the planned burndown for the Agency Baseline
Commitment LRD.
As described in Section 7.3.3, understanding the difference between available margin days and other
non-working calendar days is helpful to the PM, especially in the later phases of the P/p. Figure 8-43
illustrates how a P/p can track schedule margin (and contingency) erosion (and restoration) over time
against the planned depletion.
Figure 8-43. A Margin Trend Report differentiating between working days (margin) and non-working days (contingency). A
report that captures both working and non-working days provides additional clarity for the PM to understand the total time
available for mitigating risks versus addressing performance issues.
EVM Reports
EVM metrics are widely used to communicate progress against baseline as well as changes in the
schedule progress month-to-month. Several examples of EVM Reports follow:
BEI: The BEI is the ratio of activities actually completed to activities planned to be completed. BEI is
described further in Section 7.3.3.1.3. Figure 7-18 illustrates a typical BEI report showing both the
current BEI calculation and the BEI trend over time. Figure 8-44 also illustrates a cumulative BEI report.
Figure 8-45 shows the BEI performance calculated on a month-by-month basis over the entire length of
a project. Figure 8-46 shows the BEI performance over a nine-month window.
Figure 8-44. An example of a cumulative BEI Trend Report.
Figure 8-45. An example of BEI Trend Report month-by-month over the entire length of a NASA project.
Figure 8-46. An example BEI Trend Report showing performance over a nine-month window.
CEI: The CEI is an index value determined by dividing the number of tasks/milestones that actually
finished during the current reporting period by the number forecasted to finish during that period. CEI
is described further in Section 7.3.3.1.3. Figure 7-20 illustrates a
typical CEI trend chart. Figure 8-47 shows the CEI performance calculated on a month-by-month basis
over the entire length of a project. Figure 8-48 shows the CEI performance over a nine-month window.
Figure 8-47. An example of CEI Trend Report month-by-month over the entire length of a NASA project.
Figure 8-48. An example CEI Trend Report showing performance over a nine-month window.
HMI: The HMI measures the ratio of baseline tasks completed early or on time to the number of tasks
with a baseline finish within a given month. HMI is described further in Section 7.3.3.1.3. Figure 7-21
illustrates a typical HMI trend report. Figure 8-49 shows the HMI performance calculated on a month-
by-month basis over the entire length of a project. Figure 8-50 shows the HMI performance over a nine-
month window.
Figure 8-49. An example of HMI Trend Report month-by-month over the entire length of a NASA project.
Figure 8-50. An example HMI Trend Report showing performance over a nine-month window.
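The following is a minimal sketch, in Python, of the BEI, CEI, and HMI calculations as described above
(see Section 7.3.3.1.3 for the formal definitions); the task list is hypothetical, and real counts would
come from the IMS.

# Minimal sketch of the BEI, CEI, and HMI calculations over a hypothetical
# task list; in practice these counts come from the IMS.
from datetime import date

time_now = date(2024, 6, 30)
period_start = date(2024, 6, 1)  # current reporting period (month)

tasks = [  # (baseline_finish, forecast_finish, actual_finish or None)
    (date(2024, 5, 10), date(2024, 5, 10), date(2024, 5, 9)),
    (date(2024, 6, 12), date(2024, 6, 20), date(2024, 6, 25)),
    (date(2024, 6, 20), date(2024, 6, 28), None),
    (date(2024, 8, 1),  date(2024, 8, 15), None),
]

# BEI: tasks actually completed vs. tasks baselined to complete by time now.
baselined_to_complete = sum(1 for b, f, a in tasks if b <= time_now)
actually_completed = sum(1 for b, f, a in tasks if a is not None)
bei = actually_completed / baselined_to_complete

# CEI: tasks finished this period vs. tasks forecast to finish this period.
forecast_this_period = sum(1 for b, f, a in tasks if period_start <= f <= time_now)
finished_this_period = sum(1 for b, f, a in tasks
                           if a is not None and period_start <= a <= time_now)
cei = finished_this_period / forecast_this_period

# HMI: of the tasks with a baseline finish this month, how many finished
# early or on time ("hits")?
due_this_month = [(b, a) for b, f, a in tasks if period_start <= b <= time_now]
hits = sum(1 for b, a in due_this_month if a is not None and a <= b)
hmi = hits / len(due_this_month)
print(f"BEI={bei:.2f} CEI={cei:.2f} HMI={hmi:.2f}")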
SPI: The SPI measures the budgeted cost of work performed up to the "Time Now" date, divided by the
budgeted cost of work scheduled to be performed. It provides the ratio of P/p dollars earned-to-date
(work performed) to the dollars the P/p should have earned (work planned). SPI is described further in
Section 7.3.3.1.4 and shown in Figure 7-23.
SPIt: SPIt is the ratio of earned schedule to actual duration. It is determined by calculating the Earned
Schedule (ES), which is the time at which the budgeted cost of work scheduled (BCWS) equals the
current budgeted cost of work performed (BCWP), and then dividing the ES by the actual P/p duration
up to "Time Now" (AT). It is analogous to the Cost Performance Index (CPI), as both are referenced to
"actuals". It provides the ratio of the P/p time earned-to-date to the time the P/p should have earned.
SPIt is described further in Section 7.3.3.1.3. Figure 8-51 illustrates the SPI and SPIt calculations.175
Figure 8-51. An EVM Trend Report showing EVM and Earned Schedule-based calculations for Schedule Variance (SV vs SVt) and
Schedule Performance Index (SPI vs. SPIt).
For P/ps that do not require EVM, similar reports can be generated using the planned or baseline
schedule dates.
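The following is a minimal sketch, in Python, of the SPI and SPIt calculations described above; the
monthly cumulative BCWS/BCWP values are hypothetical, and linear interpolation between monthly
points is an assumption of this sketch.

# Minimal sketch of SPI and SPIt using hypothetical monthly cumulative
# BCWS/BCWP values (in $K); months are indexed from 1.
bcws_cum = [100, 250, 450, 700, 1000]  # planned value by end of each month
bcwp_cum = [90, 220, 400, 620, 900]    # earned value by end of each month
at = 5                                 # Actual Time: months elapsed to "Time Now"

spi = bcwp_cum[at - 1] / bcws_cum[at - 1]  # classic SPI = BCWP / BCWS

def earned_schedule(bcwp_now, bcws_cum):
    """Time (months) at which the baseline planned to earn bcwp_now,
    linearly interpolated between the monthly BCWS points (an assumption)."""
    for n, pv in enumerate(bcws_cum, start=1):
        if pv >= bcwp_now:
            prev = bcws_cum[n - 2] if n > 1 else 0.0
            return (n - 1) + (bcwp_now - prev) / (pv - prev)
    return float(len(bcws_cum))

es = earned_schedule(bcwp_cum[at - 1], bcws_cum)  # ES in months
spit = es / at                                    # SPIt = ES / AT
print(f"SPI={spi:.2f}  ES={es:.2f} months  SPIt={spit:.2f}")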
175 Drexler, J., T. Parkey, and C. Blake. "Techniques for Assessing a Project's Cost and Schedule Performance." NASA Cost and
Schedule Symposium, 2017.
Figure 8-52. Example of a Risk-based Trend Chart for completion date used in a NASA project.
8.3.2.4.3 Forecast Reporting: Prediction of Future Status and Progress (Projections)
This category of communication and reporting refers to any report generated from a stochastic or
probabilistic modeling technique, such as an SRA/ICSRA. These reports may also involve comparisons,
trends, or metrics that are risk- and uncertainty-informed, including predictions utilizing EVM metrics.
Examples of forecast reports are described in the following sections.
Figure 8-55. A stochastic Critical Path Report generated from a probabilistic SRA.
Sensitivity Report
“Tornado” charts are typical outputs of Monte Carlo simulation tools and can illustrate sensitivity or
cruciality for activity durations, costs, and risks, as described in Section 6.3.2.5.2.3. Figure 8-56 shows an
example of a Risk Sensitivity Report used in risk prioritization.
SRA Reports
A CDF, or S-curve, can be used to communicate information about SRA/ICSRA simulation runs for
reporting as follows:
• Analysis of Alternatives Reports are usually generated early in the P/p life cycle as described in
Section 6.3.2.5.3.1 and illustrated in Figure 6-55.
• Completion Range Reports are developed prior to baselining as described in Section 6.3.2.5.3.4
and illustrated in Figure 6-59.
• Margin Sufficiency Reports help to track and evaluate whether the P/p is carrying enough
margin throughout the lifecycle, as described in Sections 6.3.2.5.3.6 and 7.3.3.2.3, and
illustrated in Figure 6-63 and Figure 6-64.
• Trade Studies/Risk Sensitivity Analysis Reports can be generated as part of routine P/p
assessments or to support SRB assessments at LCRs, as described in Section 6.3.2.5.3.3 and
illustrated in Figure 6-57.
Figure 8-58. An illustration of the characteristics of a JCL Scatterplot.
Integrated Master Schedule Backups
It is a best practice for the P/p schedule and associated BoEs to be routinely backed up throughout the
P/p lifecycle, starting even before the schedule is baselined. Schedules should, at a minimum, be
backed up by the P/S on a monthly basis or prior to any major changes in the schedule (e.g., logic
sequence changes, task/milestone additions, and deletions). This practice ensures that a record of
changes is maintained to support trending, recovery, and future estimating. Backups may include
electronic and hard copies, but
electronic copies at a minimum are recommended. Backup files should be verified periodically and
stored in a secure location with controlled access.
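As an illustration only, the following is a minimal sketch, in Python, of a timestamped backup routine
consistent with this practice; the file names and locations are hypothetical, and the verification step is
a simple size comparison.

# Minimal sketch of a monthly (or pre-change) IMS backup; paths and file
# names are hypothetical placeholders.
import shutil
from datetime import date
from pathlib import Path

ims_file = Path("project_ims.mpp")     # native scheduling tool file
backup_dir = Path("schedule_backups")  # access-controlled location
backup_dir.mkdir(exist_ok=True)

# Timestamped copy so each version is preserved as a record of changes.
backup = backup_dir / f"{ims_file.stem}_{date.today():%Y%m%d}{ims_file.suffix}"
shutil.copy2(ims_file, backup)

# Periodic verification: confirm the copy exists and matches the source size.
assert backup.exists() and backup.stat().st_size == ims_file.stat().st_size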
Schedule Data Backups
It is a best practice for schedule performance data, as well as assessment and analysis inputs, models,
and results to be routinely backed up for easy recovery, repeat analysis, and development of trends.
Maintaining records that explain all changes in activity durations or logic as changes are made, and
especially at life cycle reviews, can prove invaluable at later stages of the P/p. Activity log notes
are often used for the purpose of capturing logic when changes occur. Both activity log notes and
information captured at life cycle reviews will provide valuable data if it becomes necessary to
reconstruct what happened and why. Backed-up schedule data is essential for performing assessment,
analysis, and control processes, such as trending schedule performance over time and forecasting
completion dates. It is also invaluable in estimating efforts on future P/ps, providing information to
support the BoE for overall P/p schedule duration, specific task durations, task duration uncertainty, and
schedule margin.
Historical Schedule Archives
Schedule versions that capture significant workflow or duration changes, such as replans or rebaselines,
represent key status points.
points. These iterations of the schedule are essential for estimating, trending, forecasting, and analysis
efforts on both current and future P/ps and should be archived within a master database or commonly-
accessible repository. Historical schedules can also be used for future P/ps by saving schedule files as
templates.
Archived schedules should be placed within a P/p schedule database, as well as in a commonly-
accessible repository, when appropriate. Formally archiving schedules in the Agency-level Schedule
Repository, described in Section 5.5.9, is required for all 7120.5 and 7120.8 P/ps with LCCs of $50
million or greater. These P/ps are required to submit the IMS in its native scheduling tool
formats (e.g., MS Project .mpp files and/or Primavera P6 files). Specific Schedule Repository
requirements can be found on the Schedule Community of Practice (SCoPe) website.176 P/ps not subject
to the Schedule Repository requirement should maintain an archive of IMSs in the highest-level
repository appropriate, as determined by their organization, whether that is the Agency Schedule
Repository or an MD- or P/p-level schedule repository.
Historical Schedule Narrative Archives
It is a best practice for narrative associated with the original baseline IMS, any significant replan IMSs,
all rebaseline IMSs, and the as-built IMS, along with schedule versions at each major life cycle review
milestone, to be included in P/p schedule archives. A historical narrative presents past P/p schedule
information in a meaningful format with content defined and documented for the complete life cycle.
The historical narrative should present a story of the P/p from beginning to end that highlights the major
events. It includes figures and diagrams that note significant changes between milestones. The
narrative should not capture every single detail but provide others with enough information to
determine how the P/p scope of work was accomplished and what implementation strategies were
used. Historical narrative is often captured as part of the CADRe, described in Section 5.5.9.177
Schedule Actuals Archives
It is a best practice for a schedule (and cost) data package associated with the original baseline IMS,
any significant replan IMSs, all rebaseline IMSs, and as-built IMSs, along with schedule versions at
each major life cycle review milestone, to be documented and archived. Archiving schedule (and cost)
data packages facilitates the use of the P/p "actuals" data by others in the organization, and in some
cases across the Agency, allowing for more-informed planning at all levels of the Agency. For
example, “CADRe developers are able to quickly upload the documents and information and customized
reporting is available for analysts and users.”178
The intent of the archive is not to include every iteration of every assessment, analysis, or model run,
but rather the iterations at key events in the P/p lifecycle, as well as final products. All supporting data for
176 Agency Policy Guidance to Enhance Earned Value Management (EVM) and Create a Schedule Repository. June 4, 2019.
https://community.max.gov/display/NASA/Schedule+Community+of+Practice
177 CADRe/ONCE – Data Collection and Database.
https://www.nasa.gov/offices/ocfo/functions/models_tools/CADRe_ONCE.html
178 CADRe/ONCE – Data Collection and Database.
https://www.nasa.gov/offices/ocfo/functions/models_tools/CADRe_ONCE.html
the results should be archived. This information can include, but is not limited to: BoEs, the baseline
IMS, LCR IMSs, the as-built IMS, Analysis Schedules, routine (i.e., monthly, etc.) schedule metrics,
uncertainty, risks, parametric models, model assumptions, cost and schedule benchmarks, as well as
other specific communication/reporting documents discussed in Section 8.3.2.4. Detailed data may be
referred to, including direction regarding how to retrieve that data, if not part of the formal archive.
The following types of schedule-related reports should be considered a minimum set for archiving
purposes:
• Schedule Reports:
o IMS
o Summary Schedule
o Performance Trend Report (Tasks/Milestones Planned vs. Actuals)
o EVM Metrics, if applicable
o Critical Path Report
o Total Slack Report
o Schedule Margin Report
• Cost Reports:
o Performance Trend Reports (Budget vs. Actuals)
o EVM Metrics
o Over-budget Tasks
o Over-budget Resources
• Technical Reports:
o Technical Scope Changes
o Technical Parameters (Planned vs. Actuals)
o Risks/Issues Reports (e.g., Risk Waterfall (burn-down) charts, etc.)
Archived schedule data should be placed within a master database or commonly-accessible repository.
The P/S should coordinate with the Configuration Manager to ensure that Schedule Documentation files
are archived at the appropriate levels. For instance, while some P/ps may not be required to provide
schedule (and cost) data packages for archive purposes at the Agency level (e.g., CADRe, Schedule
Repository, EVM Repository), there may be a P/p-, MD-, or Center-level requirement for such data. The
P/p repository is typically maintained by the Configuration Manager (whether this be in the P/p business
offices, systems management offices, or P/p analysis offices), whereas a Center-level repository is
maintained by a specific point of contact at a given Center. At the Agency level, the CADRe, Schedule
Repository, and EVM Repository are maintained by the OCFO’s Strategic Investments Division (SID).
Schedule Analysis Archives
It is a best practice for analysis input data, analysis models, and analysis results to be formally
archived throughout the P/p lifecycle for easy recovery, repeat analysis, and development of trends.
Whether performed routinely or as part of an Agency-level requirement, analysis data often proves
invaluable in helping to tell the P/p’s “story”. For instance, analysis data can be trended over time,
which necessitates uncomplicated access to past analysis files. Figure 8-59 illustrates an example
archival structure.
Figure 8-59. Archival of the entire data set input, the models and the results is critically important to support repeat analyses
and examination of trends.
The top-level folder should contain a Review Plan and a ToR (Terms of Reference) if the analysis
supports an SRB review. If it supports a major milestone, it may be appropriate to capture and archive
any cases run specifically for the SRB. Trial versions, preliminary versions, and final versions should be
clearly marked. A recommended hierarchical structure follows (a folder-creation sketch appears after
the list):
1 Analysis/Review Name and Date. Contents: Analysis instructions and the final report (text
version and presentation version).
1.1 Analysis Versions: Trial, Preliminary, Final, Final Update, etc.
1.2 Read Me File: Describes content of folder. Specifies the Monte Carlo simulation tool used
and version number.
1.2.1 Input Data: Copies of P/p baseline data provided for the analysis.
1.2.2 Modeling Data/Worksheets: Spreadsheets or other formats with input parameters for
uncertainties, risks, and costs, etc.
1.2.3 SRA Model: The native SRA file that opens in the simulation tool or in the schedule
software with the statistical package add-in and includes all statistical distribution
parameters required to repeat the analysis if necessary.
1.2.4 Post-process Files. Excel Files with exported statistical data used to generate results/plots.
Must include case identification and description. If multiple files, label with case names.
1.2.5 Reports and Presentation Files. Interim files used for the various versions. Final version
goes in the top-level folder.
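As referenced above, the following is a minimal sketch, in Python, that lays out this hierarchy as
folders; the names mirror the numbered structure and are illustrative only.

# Minimal sketch creating the recommended analysis archive hierarchy;
# folder names mirror the numbered structure above and are illustrative.
from pathlib import Path

root = Path("PDR_SRA_2025-03-15")  # 1: Analysis/Review name and date
subfolders = [
    "1.1_Analysis_Versions",        # Trial, Preliminary, Final, Final Update
    "1.2.1_Input_Data",             # P/p baseline data provided for the analysis
    "1.2.2_Modeling_Data",          # uncertainty, risk, and cost worksheets
    "1.2.3_SRA_Model",              # native SRA file with distribution parameters
    "1.2.4_Post_Process_Files",     # exported statistical data, labeled by case
    "1.2.5_Reports_Presentations",  # interim versions; final goes in the root
]
for sub in subfolders:
    (root / sub).mkdir(parents=True, exist_ok=True)
# 1.2: a Read Me describing the contents and the simulation tool/version used.
(root / "1.2_README.txt").write_text(
    "Contents of this archive; Monte Carlo tool and version: <tool> <x.y>\n")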
Lessons Learned Archives
Effective P/p management and schedule management techniques should be carefully recorded and
replicated throughout the P/p life cycle, where practical. Just as carefully, ineffective practices should
be noted and avoided in the future. This includes information about a P/p's successes and failures,
including any risks that were effectively mitigated or realized, which can be used to aid in the
management of current P/p issues or in training for similar future or recurring P/ps.
Schedule statistics are also often useful to future P/ps and may be included in a current P/p’s lessons
learned archive. Most of these statistics can be gleaned from routine data backups and change control
documents, if properly maintained. One approach in this area is to communicate with NASA’s Schedule
Community of Practice (SCoPe), in order to ascertain the types of statistics that would be most useful for
future P/p efforts. Preparation and dissemination of a summary document during closeout is often a
much more efficient use of time than later researching archives. Another good practice is to make a
“wish list” throughout the P/p life cycle, recording the types of schedule-related statistics that P/p
personnel would have found helpful had they been available. During the course of the P/p, part of the
effort should be directed toward, or at least cognizant of, accommodating this type of data collection at
specified intervals. Yet another good practice to efficiently and effectively capture lessons learned is to
follow the best practices mentioned elsewhere in this handbook. Specifically, the use of change control,
data structure, common naming conventions, and consistent data coding enhances the collection,
analysis, and future P/p use of schedule data statistics. As the P/p proceeds to completion, this
information facilitates final P/p team meetings, as well as administrative closure and contract closeout.
As is the case with the schedule data archives, P/p Schedule Management lessons learned should be
placed within the P/p’s schedule repository and an Agency-level repository, such as the Schedule
Repository and/or CADRe, as appropriate. This information will help in the collection of best practices
and P/p data that can be communicated through reports, white papers, conferences or other
distributions for SCoPe and broader PP&C knowledge sharing.179
8.4 Skills and Competencies Required for Schedule Documentation and Communication
The skills and competencies required for Schedule Documentation and Communication can be found on
the SCoPe website.180
9.1 Acronyms
AA Associate Administrator
ABC Agency Baseline Commitment
AC Actual Cost
ACWP Actual Cost of Work Performed
ADM Arrow Diagramming Method
AI&T Assembly, Integration, and Test
ALAP As Late As Possible
ANSI American National Standards Institute
AO Announcement of Opportunity
AoA Activity on Arrow
AON Activity on Node
APAC Agency Programmatic Analysis Capability
APPEL Academy of Program and Project Leadership
ASAP As Soon As Possible
AT Actual Time
ATP Authority to Proceed
BCR Baseline Change Request
179 The NASA Cost and Schedule Symposium website has links to white papers presented at the Symposium from individuals in
NASA’s cost, schedule, and EVM communities. https://www.nasa.gov/offices/ocfo/cost_symposium
180 SCoPe website, https://community.max.gov/x/9rjRYg
BCWP Budgeted Cost of Work Performed
BCWS Budgeted Cost of Work Scheduled
BEI Baseline Execution Index
BoE Basis of Estimate
BOM Bill of Materials
BP Best Practice
CA Control Account
CADRe Cost Analysis Data Requirement
CAIV Cost As an Independent Variable
CAM Control Account Manager
CBS Cost Breakdown Structure
CCB Configuration Control Board
CCM Critical Chain Method
CDF Cumulative Distribution Function
CDR Critical Design Review
CDRL Contract Data Requirements List
CEI Current Execution Index
CER Cost Estimating Relationship
CFOU Chief Financial Officer University
CLIN Contract Line Item Number
CMC Center Management Councils
CM/DM Configuration Management and Data Management
COTR Contracting Officer’s Technical Representative
COTS Commercial-Off-The-Shelf
CPFF Cost Plus Fixed Fee
CPI Cost Performance Index
CPLI Critical Path Length Index
CPM Critical Path Method
CPR Contractor Performance Report
CPTF Critical Path Total Float
CR Continuing Resolution
CV Coefficient of Variation
CWBS Contractor Work Breakdown Structure
DA Decision Authority
DDE Dynamic Data Exchange
DDT&E Design Development Test and Evaluation
DID Data Item Description
DM Decision Memorandum
DQI Data Quality Indicator
DR Data Requirements
DRD Data Requirements Description
EAC Estimate at Completion
EIA Electronic Industries Alliance
ES Earned Schedule
EV Earned Value
EVM Earned Value Management
EVMS Earned Value Management System
FAD Formulation Authorization Document
FAR Federal Acquisition Regulations
FFRDC Federally Funded Research and Development Center
FNET Finish No Earlier Than
FNLT Finish No Later Than
FRR Flight Readiness Review
FTE Full Time Equivalent
FY Fiscal Year
GAO Government Accountability Office
GFE Government Furnished Equipment
GFP Government Furnished Property
GPMC Governing Program Management Council
GR&A Ground Rules & Assumptions
GSFC Goddard Space Flight Center
HMI Hit or Miss Index
HOT Hands-On Training
HQ Headquarters
I&T Integration and Test
IA Independent Assessment
IBR Integrated Baseline Review
ICD Interface Control Document
ICSRA Integrated Cost Schedule Risk Analysis
IMP Integrated Master Plan
IMS Integrated Master Schedule
IPMD Integrated Program Management Division
IPMR Integrated Program Management Report
IPT Integrated Product/Project Team
IRT Independent Review Team
IT Information Technology
ITAR International Traffic in Arms Regulations
JCL Joint Confidence Level
JPL Jet Propulsion Laboratory
JSC Johnson Space Center
JSCC Joint Space Cost Council
KDP Key Decision Point
LCC Life Cycle Cost
LCR Life Cycle Review
LOA Letter of Agreement
LOE Level of Effort
LRD Launch Readiness Date
LxC Likelihood x Consequence
MA Management Agreement
MDAA Mission Directorate Associate Administrator
MDR Mission Definition Review
MFO Must Finish On
MOA Memorandum of Agreement
MOU Memorandum of Understanding
MR Management Reserve
MS Microsoft
MSFC Marshall Space Flight Center
MSO Must Start On
MSOD Mission Support Office Director
MSR Monthly Status Review
NAFCOM NASA Air Force Cost Model
NAMS NASA Access Management System
NASA National Aeronautics and Space Administration
NDIA National Defense Industrial Association
NICM NASA Instrument Cost Model
NID NASA Interim Directives
NOA New Obligation Authority
NPD NASA Policy Directive
NPR NASA Procedural Requirements
NSM NASA Structure Management
OBS Organization Breakdown Structure
OCE Office of the Chief Engineer
OCFO Office of the Chief Financial Officer
ODBC Open Database Connectivity
OIG Office of Inspector General
OJT On-the-Job Training
OMB Office of Management and Budget
ONCE One NASA Cost Engineering Database
OoE Office of Evaluation
ORR Operational Readiness Review
P/p Program/project
P/S Planner/Scheduler
PAR Program Assessment Review
PCA Project Commitment Agreement
P-CAM Project Control Account Managers
PCM Project Control Milestone
PDF Probability Density Function
PDM Precedence Diagramming Method
PDR Preliminary Design Review
PE Principal Engineer
PERT Program Evaluation Review Technique
PFA Program/project Formulation Agreement
PIR Program Implementation Review
PM Program/Project Manager
PMB Performance Measurement Baseline
PMBOK Guide to Project Management Body of Knowledge
PMC Program Management Council
PMI Project Management Institute
PMT Performance Measurement Techniques
PP&C Project Planning and Control
PPBE Planning, Programming, Budgeting, and Execution
PPR Periodic Project Review
PRR Preliminary Requirements Review
PSR Program Status Review
QSR Quarterly Status Review
R&T Research and Technology
RBS Resource Breakdown Structure
Rec/Del Receivable/Deliverable
REDSTAR Resource Data Storage and Retrieval
RFA Request for Action
RFP Request for Proposal
RID Review Item Discrepancy
RMS Risk Management System
RP Recommended Practice
SATERN System for Administration, Training, and Educational Resources for NASA
SCoPe Schedule Community of Practice
SDR System Definition Review
SER Schedule Estimating Relationship
SID Strategic Investments Division
SIR System Integration Review
SMART Schedule Management and Relationship Tool
SMD Science Mission Directorate
SME Subject Matter Expert
SMH NASA Schedule Management Handbook
SMP Schedule Management Plan
SNET Start No Earlier Than
SNLT Start No Later Than
SOPI Standing Operating Procedure Instructions
SOW Statement of Work
SPG Strategic Planning Guidance
SPI Schedule Performance Index
SPIt Time-Based Schedule Performance Index
SRA Schedule Risk Analysis
SRB Standing Review Board
SRR System Requirements Review
SS Subsystem
SSI Schedule Sensitivity Index
STAT Schedule Test and Assessment Tool
SV Schedule Variance
TA Task Agreement
TBD To Be Determined
TBR To Be Resolved
TD Time Dependent
TF Total Float
TFCI Total Float Consumption Index
TI Time Independent
TSC Technical Schedule Cost
ToR Terms of Reference
UFE Unallocated Future Expense
UID Unique Identifiers
WAD Work Authorization Document
WBS Work Breakdown Structure
WP Work Package
WYE Work Year Equivalent