Forensic Schedule Analysis Methods
Different Results
John Livengood, Esq. AIA, FAACE, CCP, PSP, CFCC
Patrick Kelly, P.E. PSP
ABSTRACT
Perceived wisdom within the construction industry is that different Forensic Schedule Analysis
(FSA) methods produce different results for the same set of facts. Although there are many
potential variables that could cause this, such as bias of the analyst or the quality of the
implementation of a method, some experts have expressed concern that the methods themselves
generate different results and are therefore potentially defective. But do the different
methods actually generate different answers when applied properly to the same set of facts, or are
the observed differences natural aspects of the methods that can be documented and quantified?
This paper will answer that question by examining a specific set of facts and applying each of the
four major Forensic Schedule Analysis methods – the As-Planned/As-Built, Contemporaneous
Period Analysis, Retrospective TIA, and Collapsed As-Built – to those facts. Further, since the
methods do generate different results, the paper will explain how and why that occurs, how to
quantify and reconcile the differences, and what conclusions an FSA expert should draw from those
differences.
However, RP 29R-03 breaks the methods into four major families: the As-Planned versus As-
Built (As-Planned vs. As-Built/MIP 3.2), the Contemporaneous Period Analysis (CPA/MIP 3.5,
sometimes commonly called the “Windows” method), the Retrospective Time Impact Analysis
(TIA/MIP 3.7), and the Collapsed As-Built (CAB/MIP 3.9). The further breakdown of the
families into the nine methods – the MIPs – is defined by factors such as timing of the analysis,
whether the model relies on active CPM calculations or not, whether the model adds or subtracts
fragmentary networks (“fragnets”) to simulate the effects of delays, or whether the analysis is
performed globally or in periodic steps. [6]
The term “Windows” is sometimes used to describe a specific analysis method
developed in the 1980s [26]. However, in this paper, and throughout RP29R-03, “window”
refers to a slice of time in the life of a project, within which the analyst will use the selected
method to examine that window’s events. Most of the methods can be implemented in a way
that subdivides the project duration into time windows. The choice and definition of the periods
of time used to form the windows will depend on the circumstances. Usually, the start and
finish of the windows are chosen to coincide with the monthly progress update and pay application.
Occasionally the start and finish points for windows are identified to correspond with specific
delay events which are of interest to the analyst. Although this is potentially valuable, it is
inadvisable to have analysis windows which are wider than the period encompassed by the
progress updates. The monthly update (or pay application date, if no updates exist) should be the
maximum width of a window. [21]
CONTEMPORANEOUS UNDERSTANDING OF CRITICALITY
One of the more important differences between the forensic methods relates to how they treat the
project management team’s contemporaneous understanding of the critical path work. It is
important to understand whether the contractor and the owner used the schedules during the
project to establish their beliefs regarding which work was driving project completion and then
used that knowledge to plan the upcoming period’s work. What the project management team
knew is called its “contemporaneous understanding of criticality.” From the perspective of the
project management team that is properly using their prospective schedules for planning and
executing the next period of work, their understanding of what was critical to project completion
(and therefore the explanation of their actions at the time) is related to the status of the critical
path at the time in question. Even in the case where future events shift the final as-built critical
path away from an activity that was considered critical by the project management team at the
time, the understanding of their actions is possible only by understanding what they thought was
critical at the time.
A major difference in the analysis methods involves whether (and how) the methods incorporate
the contemporaneous understanding of criticality. Some methods rely heavily on the
contemporaneous view of criticality, while others determine criticality in a different way (such as
the determination or calculation of an “as-built critical path” which may or may not have a
relationship to the contemporaneous critical path). The authors and many commentators believe
FSA methodologies that reflect the contemporaneous understanding of criticality are preferred
because they better reflect the understanding and choices made in executing the project. [1] [20]
FIGURE 1 provides a general overview of the role of the contemporaneous view of criticality in
FSA methods; however, the specifics of that role will be discussed in more detail later.
FIGURE 1: The role of the contemporaneous view of criticality in FSA methods

Common Name                                | MIP | Reliant upon Contemporaneous View of Criticality
As-Planned vs. As-Built (Single Step)      | 3.1 | No *
As-Planned vs. As-Built (Multiple Step)    | 3.2 | No *
Contemporaneous Period Analysis            | 3.3 | Yes
Bifurcated Contemporaneous Period Analysis | 3.4 | Yes
Recreated Contemporaneous Period Analysis  | 3.5 | No
Impacted As-Planned                        | 3.6 | No
Retrospective TIA                          | 3.7 | Yes **
Collapsed As-Built (Single Step)           | 3.8 | No
Collapsed As-Built (Multiple Step)         | 3.9 | No

* RP 29R-03 allows for inclusion of the contemporaneous view in the definition of the As-Built Critical Path.
** The inserted fragnets must also have been contemporaneously understood to affect the critical path.
First, the existence of a contemporaneous schedule series, combined with a reasonable belief that the project management team’s decisions were influenced
to some extent by that schedule series, should compel the analyst to use those schedules in the
FSA. The alternate option – creating a new schedule (or series of schedules) that were unknown
on the project but which now are expected to prove delays – disregards evidence and
perspectives that would help explain why things happened on a project. Second, the
contemporaneous understanding of criticality informs the correct interpretation of
contemporaneous periodic revisions to network activities, logic, and durations. For instance,
network revisions implemented in a particular update schedule that did not represent changes
to intended means and methods may not have affected the contemporaneous understanding of
criticality, since such revisions indicate a detachment between the schedule development and
the actual construction.
Third, the fact of whether or not a contemporaneous understanding of criticality was reflected in
a schedule should be a factor in determining which FSA method is best for analyzing a given
project’s delays. For instance, if a project had update schedules created by a scheduler off-site
that were never reviewed by the project management team, it is probably not appropriate for a
forensic analyst to choose a method like the CPA/MIP 3.3, which relies on the monthly schedule
updates prepared on the project. Similarly, an analyst would likely be in error in selecting the As-
Planned vs. As-Built/MIP 3.2, which does not inherently consider the contemporaneous
understanding of criticality, to analyze a project that had a good series of schedules used by the
project manager and superintendent to plan and execute the project. In each of these examples,
there is a mismatch between the FSA method’s ability to reflect the contemporaneous
understanding of criticality and the project management team’s real-time consideration of
criticality in their actual monthly decisions. In addition to the well-recognized factors to consider
in selecting an appropriate FSA method, as discussed in Section 5 of RP29R-03, the analyst
should therefore also consider how the schedules were used and whether they influenced the
decision-making process during execution.
Finally, the contemporaneous understanding of criticality will dictate when and how fragnets can
be inserted into a schedule during the implementation of a modeled analysis. The insertion of
fragnets which do not represent contemporaneous knowledge creates a “theoretical” schedule not
used on the project. Furthermore, implementing an insertion when the fragnet is not consistent
with contemporaneous knowledge runs the risk of selectively modeling delays and creating an
imbalanced, partial analysis.
THE DIFFERENCES IN RESULTS
A common criticism of the major methods of examining schedule delay is that different methods
applied to the same set of facts yield different results. Several practitioners have previously
examined these criticisms. [24] Although there have been varied results from the studies, it is
generally accepted wisdom in the industry that the major methods return different results when
applied to the same set of facts. Yet none of these studies has offered a comprehensive
explanation of why the different methodologies produce different results. This gap has created a
perception among some practitioners that some or all of the methods are inherently unreliable and
therefore invalid. That inability to explain the differing results, and the tautological conclusion
that some (or all) of the methodologies are therefore flawed, has exacerbated the “battle of the
scheduling experts” in various dispute resolution forums. These battles have sometimes created
the impression that FSA experts are frauds simply offering their biased opinions to their client.
[27] Such views do little to efficiently resolve disputes.
Many of the problems with reconciling the results of competing forensic schedule analyses stem
from issues unrelated to the accuracy of the methodology itself. These problems include, but are
not limited to:
• The incorrect selection of a method, which results in attempting to use an FSA
methodology poorly equipped to achieve the goal of the analysis.
• The poor implementation of a MIP, which both negatively affects the perception of the
methodology and raises the issue of the competency of the analyst; or,
• The use of a schedule series that is unreliable, unverifiable, or otherwise not capable of
supporting a forensic analysis. Since many projects do not have properly maintained
schedule updates, it is inherently incorrect to try to use an FSA method that relies on such
updates.
While these factors are often the most common causes of problems with dispute resolution where
competing forensic schedule delay analyses are involved, the methods discussed in this paper are
not expressly designed to correct for these factors. Instead, the authors anticipate the four major
method groups chiefly being used where analyses of the existence, quantum, and responsibility of
delays are competently prepared by competing, experienced analysts. That being the case, we also
anticipate that aspects of these methods could be employed to identify a poor analysis and to
highlight its deficiencies.
What previous papers discussing the accuracy of various FSA methodologies [24] have not
identified is that the methods analyze schedule models in different ways. The As-Planned vs. As-
Built, for instance, measures “what actually happened” by using hindsight to calculate the As-
Built Critical Path and measuring delays along this path. In contrast to this, the CPA measures
what the project team believed to be critical as of a given schedule’s data date, and the impact
that events had on the contemporaneous critical path. The shifting nature of the critical path is
well documented and understood, and the As-Built Critical Path and the contemporaneous
critical path may not be the same. The critical path shifts over time – sometimes between updates
– until it ultimately comes to rest on the final day of the project. Therefore, an analyst
performing an As-Planned vs. As-Built may determine that, for a given window, the project lost,
for instance, 23 CD due to activities on the As-Built Critical Path, whereas the opposing
analyst performing a CPA would determine that during the same window, the project lost 30 CD
due to an activity on the contemporaneous critical path that does not ultimately appear on the As-
Built Critical Path. This fundamental disagreement between methods is common but not
insurmountable. In order to overcome the problems caused by the differences in the methods, the
authors recommend a common communication format: the cumulative delay graph. “Cumulative
delay” is the number of days of delay that have accrued through a given point in time. In order to
generate a cumulative delay graph, one must plot the number of days of delay that an analysis
shows the project to have suffered as a function of each date during the project. The source and
the frequency of the data points for the cumulative delay graphs will vary slightly between
methods. Most notably, the cumulative delay graph for the As-Planned vs. As-Built should be
plotted as the Daily Delay Measure (DDM) graph. [16] This method enhancement provides for
identifying the quantum of delay at any given point in time by measuring the degree of lateness
of individual activities predicated on a comparison of their actual performance dates with their
late-planned dates. For the CPA, TIA, and CAB, the days of predicted delay should be plotted as
of the data date of the schedule at which the delay days are shown to have accrued. As will be
discussed further, the resulting graph can assist in identifying reasons for differences in specific
windows of the project, thereby facilitating resolution.
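For illustration, a minimal plotting sketch in Python follows. The data points are hypothetical placeholders (loosely patterned on the Pier 6 example discussed later in this paper), and matplotlib is assumed purely as a convenience; any charting tool would serve.

```python
# Sketch: overlay cumulative delay lines for two methods on one graph.
# Each point is the delay an analysis shows as of a schedule's data date;
# all values here are hypothetical.
from datetime import date
import matplotlib.pyplot as plt

series = {
    "As-Planned vs. As-Built (DDM)": {
        date(2010, 7, 1): -44, date(2010, 8, 1): -48,
        date(2010, 9, 1): -51, date(2010, 10, 1): -77,
    },
    "CPA (MIP 3.3)": {
        date(2010, 7, 1): -47, date(2010, 8, 1): -48,
        date(2010, 9, 1): -82, date(2010, 10, 1): -84,
    },
}

fig, ax = plt.subplots()
for method, points in series.items():
    xs = sorted(points)
    ax.step(xs, [points[x] for x in xs], where="post", label=method)
ax.set_xlabel("Data date")
ax.set_ylabel("Cumulative delay (CD)")
ax.legend()
fig.savefig("cumulative_delay.png")
```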
The cumulative delay graph is part of a larger reconciliation process between methodologies
because it allows a direct comparison of the quantum and timing of delay, albeit NOT the
responsibility for delay. For our comparison of the number and timing of delay days generated
for each methodology, we have undertaken the following seven steps:
1. The source data is validated as a prerequisite to method selection.
2. As part of the method selection process, [7] the project records are examined to
determine whether the contemporaneous view of criticality should be a primary
determining factor in deciding which method to use. As with all parts of the method
selection process, this decision should be supported with evidence.
3. The causal activity for a window must be identified. The causal activity should be
determined on as frequent a periodicity as the analysis method will allow.
4. The DDM line should be plotted. This line will serve as a baseline for comparison of all
the other analyses. The DDM will serve as the cumulative delay graph for the As-Planned
vs. As-Built analysis.
5. Each of the analyses is then plotted on a cumulative delay graph. Each data point should
be the predicted completion date of the schedule as a function of that schedule’s data
date. We overlaid all the lines onto a single graph for easy comparison.
6. Each time-window of the project duration is reviewed, and the causal activities identified
by each analysis, and the amount of delay determined to have accumulated as a result of
that causal activity are noted. In the real-world, similarities in the causal activities and the
quantum of delay can foster agreement between the parties as to responsibility and assist
in resolution of delay related to that specific window.
7. Differences in either the causal identification or in the quantum of delay are identified and
explained. The differences should be explicable as resulting from differences in the
analysis methods themselves; a minimal comparison sketch follows this list.
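As an illustration of steps 5 through 7, the sketch below (reusing the hypothetical series from the plotting sketch above) flags the data dates where two analyses' cumulative delay values differ by more than a chosen tolerance; the tolerance is our own device, not drawn from RP 29R-03.

```python
# Sketch: flag data dates where two cumulative delay series diverge.
TOLERANCE_CD = 10  # calendar days of disagreement worth discussing

ddm = {"2010-07-01": -44, "2010-08-01": -48, "2010-09-01": -51, "2010-10-01": -77}
cpa = {"2010-07-01": -47, "2010-08-01": -48, "2010-09-01": -82, "2010-10-01": -84}

for data_date in sorted(ddm):
    diff = abs(ddm[data_date] - cpa[data_date])
    status = "DISCUSS" if diff > TOLERANCE_CD else "substantial agreement"
    print(f"{data_date}: DDM {ddm[data_date]} CD, CPA {cpa[data_date]} CD "
          f"-> {status} ({diff} CD apart)")
```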
The purpose of this procedure is to first and foremost underline the fact that there are
documentable and quantifiable reasons why two competent analysts of the same project could
return different results. This will not, of course, resolve differences in opinion about the
underlying reason or responsibility as to why a causal activity was delayed. If both parties
identify the same activity and similar quanta of delay, but have different opinions about why that
specific activity was delayed and therefore apportion responsibility differently, this
reconciliation process will not help resolve that issue. However, if that is the case, then the
dispute is no longer about the schedule analyses and is instead properly concerned with the facts
of the case.
CREATION OF THE TEST SCHEDULE SERIES
The ability to reconcile the results of different methods hinges in part on an understanding of the
normal differences that will be exhibited by the cumulative delay graphs of each method. In
order to establish and analyze these differences, the authors created a test schedule series
consisting of a baseline schedule, 37 updates, an as-built schedule, and a collapsible as-built
schedule. We did this, rather than use an existing schedule series from a past project, to avoid as
many of the problems associated with poor scheduling practices as possible. Additionally, it
allowed us to control the update schedules and eliminate logic revisions between the updates.
While the test schedule is neither simple nor simplistic, it does provide known limits of variables
present in most real-world schedules.
The model Baseline schedule was based on a hypothetical bridge construction project, wherein
an existing bridge with two separate spans was being replaced, one span at a time, with active
traffic shifted to the other span. The proposed maintenance-of-traffic plan mandated that a single
span be open to two-way traffic during the construction; therefore, the general process for
construction involved switching all traffic to the existing span, demolishing the abandoned span,
construction of the new span, and switching all traffic to the new span. The second existing span
would then be demolished and the second new span constructed in its place. The model baseline
schedule contained 432 activities, had a Notice to Proceed date of 1-Mar-2010, and a
predicted completion date of 7-Jun-2012 for an overall planned duration of 829 CD.
In order to create the test series of schedules for use in this analysis, the authors took a copy of
the model baseline schedule and created new durations which would represent the ultimate actual
durations of the activities. These durations were created based on a series of theoretical
productivity problems that a bridge project encountered. The new durations were input into the
copy of the model baseline schedule, and this schedule was recalculated as of the original Data
Date of 01-Mar-2010. The authors then created a total of 17 activities that represented delays that
occurred during this project. Five of these activities represented contractor-caused delays (such
as start delays or rework issues) while the remaining 12 activities represented owner delays.
These 17 activities were tied into the network of this schedule, with appropriate predecessors and
successors for the issue described by the delay activity. The schedule was recalculated, again as
of the original data date of 1-Mar-2010. The new predicted completion date of the schedule was
19-Apr-2013, or 316 calendar days (CD) after the baseline predicted completion date. This
schedule served two functions. First it was a detailed as-built schedule complete with actual start
and completion dates. Second it could function as a “Collapsible As-Built” with no dates
assigned to the actual start or finish columns, thus allowing the network logic calculations to
drive all the dates and float calculations. [9]
The Collapsible As-Built schedule was used to calculate the As-Built Critical Path of the
project, and was also used in the performance of the Collapsed As-Built
analysis. To create the fully actualized As-Built schedule, the authors applied progress across the
entire project, thereby making the start and finish dates in the Collapsible As-Built schedule into
actual start and finish dates. This As-Built schedule had a data date of 1-May-2013. By this
process the authors were able to achieve a mathematically certain As-Built Critical Path (though
the authors acknowledge that this process would be difficult to implement in the “real world”).
To create the test series of 37 update schedules necessary for portions of this analysis, the
authors extracted the actual start and finish dates, and the actual durations, from the As-Built
schedule, and input them into a de-progression spreadsheet. This spreadsheet was designed to
allow the user to estimate a remaining duration of an activity at a given point in time. Therefore,
we were able to enter the desired data date of the first update schedule (in this case, 1-Apr-2010)
and the spreadsheet would return a list of activities that would have started and finished, as well
as a list of activities that only would have started. For these activities, the spreadsheet also gave a
remaining duration, based on an assumption of straight-line progress between actual start and
actual finish. The authors then copied the model Baseline schedule and imported the actual
starts, actual finishes, and remaining durations for the activities that would have seen progress
during the update window. The schedule was then recalculated as of the new data date, and the
predicted completion date was recorded. This process was repeated for each of the 37 months for
which the project was in progress. Through this means, a complete set of updates, reflecting both
progress and logic changes (reflected in the delay activities) was created, mimicking a real
project.
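A minimal sketch of this de-progression logic follows, with invented dates (the first activity IDs are patterned on the test project; the dates are hypothetical). Under the straight-line assumption, the remaining duration at a data date is simply the time from that date to the actual finish.

```python
from datetime import date

def deprogress(activities, data_date):
    """Status activities as of data_date, assuming straight-line progress.

    activities: list of (activity_id, actual_start, actual_finish).
    Returns tuples of (id, start, finish-or-None, remaining duration in days)
    for activities that had started by the data date.
    """
    statused = []
    for act_id, start, finish in activities:
        if finish <= data_date:                    # started and finished
            statused.append((act_id, start, finish, 0))
        elif start < data_date:                    # in progress at the data date
            remaining = (finish - data_date).days  # straight-line estimate
            statused.append((act_id, start, None, remaining))
        # activities not yet started are left to the CPM network to forecast
    return statused

as_built = [  # hypothetical (id, actual start, actual finish) triples
    ("P1P60110", date(2010, 3, 10), date(2010, 3, 25)),
    ("P1P60120", date(2010, 3, 20), date(2010, 4, 18)),
    ("P1P60130", date(2010, 4, 20), date(2010, 5, 14)),
]

for row in deprogress(as_built, date(2010, 4, 1)):
    print(row)
```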
The schedule series was also created with a “weather exclusion period” that was simply a non-
work period in the calendar assigned to asphalt work. According to the calendar, no asphalt work
could occur between the start of the third week in December and the end of the second week in
March. Any asphalt activities that were pushed into this non-work period would immediately
jump forward three months, when the weather would presumably be warm enough to place
asphalt. This is a common technique in construction schedules to represent periods during which
no work can be performed on a type of work for a specified period, and it has a magnifying
effect on delays.
For instance, assume that in the test schedule update for June 2011, an asphalt activity is shown
as completing in early December. Lack of progress in the window (June to July) creates a
three-week delay that pushes that asphalt activity into the weather exclusion period. Because the
calendar with the weather exclusion period will not allow the asphalt work to start until mid-
March, the three-week delay that occurred in June has now become a three-month delay. This is
also a common source of dispute in apportionment of delays in a forensic analysis, since in many
cases there are multiple parties responsible for the delays leading up to the point where the
weather exclusion period is affecting the predicted completion date. As that is the case, disputes
often arise over who is assigned the magnified delay that occurs when the schedule’s predicted
completion date jumps across the wide non-work period. As will be seen, this “weather
exclusion period” can also account for differences in quantum, timing, and allocation of
responsibility.
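The magnifying effect is straightforward to reproduce, as in the sketch below: any date landing inside the non-work window jumps to the end of the window. The window boundaries approximate the calendar described above, and the three-week slip mirrors the example.

```python
from datetime import date, timedelta

# Approximate asphalt non-work window: third week of December through the
# second week of March, per the test schedule's weather exclusion calendar.
EXCLUSION_START = date(2011, 12, 15)
EXCLUSION_END = date(2012, 3, 14)

def apply_weather_calendar(day):
    """Shift a work date falling inside the exclusion window to its end."""
    if EXCLUSION_START <= day <= EXCLUSION_END:
        return EXCLUSION_END + timedelta(days=1)
    return day

planned = date(2011, 12, 1)              # asphalt work planned for early December
slipped = planned + timedelta(weeks=3)   # a three-week production slip

print(apply_weather_calendar(planned))   # 2011-12-01: unaffected
print(apply_weather_calendar(slipped))   # 2012-03-15: a roughly three-month jump
```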
The 39 test series schedules that were originally created represented the contemporaneous
updates that the analyst would receive as the project record schedules. These schedules were then
copied (as necessary) and used to implement the four methodological analyses. Clearly, the four
methods require different schedules for performance: the As-Planned vs. As-Built requires only
the baseline schedule and the as-built; the CAB requires the collapsible as-built schedule; the
CPA requires all the schedules as they existed during the project; and the TIA requires all the
schedules as well as the fragnets for insertion into the schedules.
CREATION OF THE CUMULATIVE DELAY GRAPHIC
The combined cumulative delay graph is shown in FIGURE 2. The black line represents the As-
Planned v. As-Built DDM line, generated from the comparison of the as-planned dates in the
baseline to the actual dates in the as-built. The cumulative delay graph for each method was
developed by calculating the predicted completion date for each schedule in the analysis
method’s series of schedules, and plotting the predicted delay as of the data date
of the schedule within which it was calculated.
Generally, it is clear that the cumulative delay graph for the Collapsed As-Built (MIP 3.9), when
only one party’s delays are removed from the as-built (solid green line), diverges the most from
the other three analyses. However, when both parties’ delays are removed from the collapsible
as-built (dotted green line) the Collapsed As-Built returns results similar to the other methods.
The As-Planned vs. As-Built (MIP 3.2) DDM line (in black), the CPA (MIP 3.3) line (in blue),
and the TIA (MIP 3.7) line (in orange) run along a largely similar path between March 2010 and
December 2011; after this point, the CPA/MIP 3.3 line and the TIA/MIP 3.7 line both drop
precipitously, whereas the As-Planned vs. As-Built/MIP 3.2 DDM line continues along roughly
the same slope as before this point. Analysts seeking to reconcile the differences between
methods must understand the causes and implications of these differences, and how they relate
to the specific way each method analyzes the CPM schedule and measures delay.
Note that the authors have calculated the slope of the cumulative delay lines in units of calendar
days per month (CD/Mo). Since a project cannot experience more delay in a month than the
duration of that month (in absence of an inserted fragnet) the maximum natural slope of an
unedited network will not exceed roughly 30 CD/Mo. Any time periods with slopes greater than
the maximum natural slope result from edited networks.
FIGURE 3: Apportionment of the 316 CD of project delay by method (values in CD)

MIP | Method                  | Delay due to Owner | Delay due to Contractor | Delay due to Weather Period | Concurrent Delay | Total
3.2 | As-Planned vs. As-Built | -164               | -152                    | -                           | -                | -316
3.3 | CPA                     | -114               | -120                    | -6                          | -76              | -316
3.7 | TIA                     | -165               | -40                     | -                           | -111             | -316
3.9 | CAB                     | -31                | -285                    | -                           | -                | -316
The update schedules created for the test series were used to develop the cumulative delay graph.
When owner delay activities started during the update period, they were shown with their Actual
Start date and a Remaining Duration proportional to the Original Duration, assuming straight-
line progress across the activity. As shown in FIGURE 5, they were not used to forward-project
the entirety of the delay, as compared with TIA/MIP 3.7. (Compare FIGURE 8).
Note that at the beginning of this elongated window, on 01-Jul-2010, the CPA shows a predicted
delay of 47 CD in the update schedule, whereas the DDM shows that the project
actually experienced 44 CD of delay. The delay prediction remains consistent with the actual
delay experienced in August 2010 as well; however, on September 1, 2010, the contemporaneous
update shows a predicted completion date that is 82 CD later than the contract completion date (CCD), whereas on the same
date, the DDM shows that the project had only experienced 51 CD of delay. This difference of
31 CD of delay is superficially a significant difference in the results of the two analyses.
Referring to the schedules, however, provides an explanation.
Reviewing the As-Built Schedule in FIGURE 7, one can see that the project was progressing
through excavation and construction of Pier 7 of the northbound bridge lanes, and upon
completion of that work, the excavation of Pier 6 began. In late August 2010, a differing site
condition was discovered at Pier 6 (represented by Activity WD0020, and highlighted in orange).
This delay fragnet impacted the start of the as-planned Activity #P1P60120 “F/R/P Footing Pier
6, NB.” In the September Update, with a Data Date of 01-Sep-2010, the impacted activity was
shown with a predicted start delay due to the fragnet, prospectively inserted to represent the
estimated impact that the differing site condition would have on the activity (and by extension on
the predicted completion date). This fragnet insertion caused the jump in the MIP 3.3/CPA
cumulative delay graph from -48 CD to -82 CD (see FIGURE 6) as of 01-Sep-2010.
The As-Built Schedule in FIGURE 7 reflects the same activities on the As-Built Critical Path
during the same time frame; however, an As-Planned vs. As-Built is not tracking a predicted
delay – it tracks when the delay actually happened (at least insofar as the level of granularity of
the DDM calculation will allow). In this case, the impacted activity does not start until late
September. As such, the delay is not recorded as of 01-Sep-2010 data date of the
contemporaneous update – it is recorded as of 29-Sep-2010, when Activity #P1P60120 actually
started. As previously discussed, the DDM calculations can be performed daily, weekly, or
monthly. In this example, the authors have performed monthly calculations to match the window
size of the contemporaneous updates. As a result, the delay (from -51 CD on 01-Sep-2010 to -77
CD on 01-Oct-2010, see FIGURE 6) becomes apparent on 01-Oct-2010. Note that the magnitude
of the predicted delay (82 CD – 48 CD = 34 CD of delay) is greater than the delay actually seen
in the As-Built (77 CD – 51 CD = 26 CD). Differences in magnitude could be related to the size
of the predicted duration of the fragnet compared to its actual duration, or to the effects of float
consumption. Regardless, the salient point is that the DDM does not look forward; it records the
delay as it accrued over the window, up to the point where the two analyses are largely in
agreement.
In this test series, the two analyses identify the same activities as the cause of delay during this
several month window. The cumulative delay graphic therefore primarily assists in this window
with quantification of delay associated with the specific events. However, in the event that the
As-Planned vs. As-Built/MIP 3.2 DDM and the CPA/MIP 3.3 determined that the As-Built
Critical Path was different than the contemporaneous critical path during this window, the
discussion between the parties should shift to whether the CPA/MIP 3.3 is an appropriate method
for the analysis.
For example, assume that a CPA/MIP 3.3 shows that a given activity was, as of the data date of a
particular update, predicted to cause 15 CD of delay, and that the analyst performing the
CPA/MIP 3.3 asserts that the predicted delay is proof of entitlement to an excusable and
compensable time extension. Meanwhile, the As-Planned vs. As-Built/MIP 3.2 DDM for that
same window shows that a different activity drives the As-Built Critical Path for that same
window and caused 17 CD of delay. The quanta are roughly the same; however, the cause of
delay is in dispute. The issue again becomes whether the schedule series affected the
contemporaneous understanding of criticality.
The analyst performing the As-Planned vs. As-Built has the benefit of demonstrating what
actually delayed the project,
actually delayed the project but the CPA/MIP 3.3 methodology might be showing the
understanding of the managers at the time of the update and their anticipated delay. As
previously discussed, the contemporaneous view of criticality is preferred, if it can be proven.
The analyst performing the CPA/MIP 3.3 cannot simply state that the prediction showed a delay
would occur; he or she should also show that the prediction affected the project management
team’s actions in some way (such as shifting resources to the activity perceived to be critical, or
planning for accelerated work in the future). If the predicted delay existed only in the
scheduler’s software and never influenced the project management team’s actions, then the
contemporaneous understanding of criticality was not affected and the predicted delay is
meaningless. In this case, the authors believe that the As-Planned vs. As-Built/MIP 3.2 DDM
line and its associated as-built critical path causal activity are more appropriate to determine the
delay for the window.
In January 2012 the CPA/MIP 3.3 line drops from a predicted delay of 185 CD to almost 300 CD
of delay. This is due to the effects of the previously discussed weather exclusion period. This
sudden drop of 115 CD is, again, a predicted delay resulting from the effects of the weather
period. Note, however, that the As-Planned vs. As-Built/MIP 3.2 DDM continues to trend
steadily downward at an average slope of approximately 8 CD/Mo. In practical terms, the
analyst performing the As-Planned vs. As-Built/MIP 3.2 and relying on the DDM would say that
the effects of the weather exclusion period were irrelevant: the contractor had been incapable of
maintaining schedule prior to 1-Jan-2012, and the work excluded by the non-work period would
not have been available for execution any earlier. In the As-Planned vs. As-Built/MIP 3.2
analysis, then, the 115 CD were a result of the contractor’s poor progress on bridge foundation
and superstructure work, unrelated to the weather exclusion. In contrast, the analyst performing
the CPA/MIP 3.3 would argue that owner delay activities (including differing site conditions,
design changes, etc.) pushed the work into the weather exclusion period and that therefore the
115 CD were the responsibility of the owner.
In answer to the other party’s charges that poor progress was the cause, as identified in the As-
Planned vs. As-Built/MIP 3.2 methodology, the contractor could mount a defense of “pacing” of
work. In other words, the contractor would allege that given his or her knowledge of the future
delay brought about by the weather exclusion (linked with his contemporaneous analysis that
attributed this delay to the owner), the contractor deliberately slowed production on available
work so that it would be complete only just in time for the early start of the weather-affected
work. Once again, however, this is an argument that rests heavily with the contractor’s
contemporaneous understanding of criticality. In order for this pacing argument to be legitimate,
the contractor would need to show that he had this understanding of the weather delay as of 1-
Jan-2012, and that he or she took actions to slow the production. Without this demonstration, it
will be difficult for the owner to accept that the production delays before 1-Jan-2012, were the
result of the contractor’s poor productivity, whereas the production delays after1- Jan-2012, were
the result of deliberate pacing. The cumulative delay graph highlights the need for proof of this
contemporaneous understanding of criticality.
Therefore, for the purposes of establishing that the CPA/MIP 3.3 graph is the appropriate
measurement tool and that it should supersede the other method’s graph for a given period, the
analyst performing the CPA/MIP 3.3 should establish the following: [25]
1. The analyst must confirm that the means and methods were accurately represented in the
contemporaneous update.
2. The analyst must confirm that the schedule was used to plan and execute the project, and
that the results of the CPM calculation influenced the contemporaneous understanding of
criticality.
The analyst would conceivably accomplish this through review of project documentation such as
meeting minutes, daily reports, and correspondence. This backup information would be essential,
however, to justifying the use of a specific method’s cumulative delay graph and associated
causal activities.
When plotted on a cumulative delay graph in FIGURE 8, the TIA/MIP 3.7 line tends to lead the
As-Planned vs. As-Built/MIP 3.2 DDM line in a manner similar to, yet more pronounced than,
the CPA/MIP 3.3 line. The TIA/MIP 3.7 line is an average of approximately 13 CD earlier in this test model.
Again, this lead is related to the fact that the TIA/MIP 3.7 is predicting delay rather than
measuring actual delay; however, in contrast to CPA/MIP 3.3, the TIA/MIP 3.7 is predicting
delay in inserted fragnets as well as in the original CPM network. To better understand the
differences between the two, refer to FIGURE 9 below.
The TIA leads the CPA/MIP 3.3 line by an average of roughly 7 CD. In addition, note that in
FIGURE 3, the number of days assigned to the contractor (40 CD of delay) is much lower than
in the other methods. In the CPA/MIP 3.3 analysis, the apportioned delay between owner and
contractor was 51% to 49%; in the TIA/MIP 3.7, the apportioned delay split was 71%
to 17%. The contractor tends to receive a lower apportionment of delay days in methods that
forward-project delays associated only with the owner. In other words, if the fragnets inserted
into a TIA are always representative of the other party’s alleged delays, then the analysis will
tend to show that the other party is responsible for most of the delays. For this reason, it is not
good practice to only model one party’s delays. In reality, delays caused by the contractor are
often not known until they occur, while delays caused by the owner, which are often changes in
scope, are usually known at least a month prior to their actual occurrence. However, there are
conceivably occasions when such an analysis could be appropriate, and those would be times
when the inserted fragnets were representative of the contemporaneous understanding of
criticality.
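To make the fragnet mechanics concrete, the sketch below runs a forward pass over a toy three-activity network, inserts a hypothetical 24-day differing-site-condition fragnet, and recalculates. The network, durations, and names are all invented; a real TIA/MIP 3.7 would impact the full contemporaneous update.

```python
# Sketch: effect of a fragnet insertion on a toy CPM network.
# Each activity maps to (duration in days, list of predecessors).

def forward_pass(network):
    """Early finish (in project days) for each activity in an acyclic network."""
    early_finish = {}
    remaining = dict(network)
    while remaining:
        for act, (dur, preds) in list(remaining.items()):
            if all(p in early_finish for p in preds):
                start = max((early_finish[p] for p in preds), default=0)
                early_finish[act] = start + dur
                del remaining[act]
    return early_finish

base = {
    "excavate_pier": (10, []),
    "footing_pier": (15, ["excavate_pier"]),
    "column_pier": (20, ["footing_pier"]),
}
print(max(forward_pass(base).values()))  # 45: unimpacted completion

impacted = dict(base)  # insert the fragnet ahead of the footing work
impacted["dsc_fragnet"] = (24, ["excavate_pier"])
impacted["footing_pier"] = (15, ["dsc_fragnet"])
print(max(forward_pass(impacted).values()))  # 69: 24 CD of predicted delay
```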
CPA/MIP 3.3 (including the related bifurcated CPA/MIP 3.4) and TIA/MIP 3.7, each propose a
forward-looking modeled analysis wherein the contemporaneous understanding of criticality is
assumed, but must be proven. However, CPA/MIP 3.3 only assumes that the unimpacted CPM
network influenced this understanding, while TIA/MIP 3.7 assumes that both the fragnet and the
CPM network were influential. Of course, this is not always accurate. If the fragnet was
contemporaneously proposed and established, it is likely that the use of this fragnet in a TIA/MIP
3.7 is correct in its assumption that the party inserting the fragnet had a contemporaneous
understanding of criticality as projected by the fragnet and schedule recalculation. This assertion
could of course be disputed or refuted by the other party. However, if the fragnets are created
after the fact and were never considered by the project management team during project
execution, then it is unlikely that the TIA/MIP 3.7 in this case is representing any
contemporaneous understanding of criticality. In other words, it pretends that the on-site
management would see future events as the re-calculated after-the-fact schedule depicts them.
The impact is seen in the cumulative delay graphs in the way that more delay accrues earlier in
the TIA/MIP 3.7 graph. FIGURE 10 shows the MIP 3.3 and the MIP 3.7 graph for the period
between November 2010 and April 2011.
FIGURE 10: TIA/MIP 3.7 compared to CPA/MIP 3.3 for Nov-2010 to Apr-2011
Both cumulative graphs begin at the same point of delay, each calculating that the project was 76
CD behind schedule as of 1-Nov-2010. However, at the start of December 2010, TIA/MIP 3.7
calculates that the project is 100 CD behind schedule, compared to only 84 CD for CPA/MIP
3.3. The TIA/MIP 3.7 cumulative delay graph stays flat from 1-Dec-2010 to 1-Feb-2011, at
which point it begins to accumulate delay again. The authors reviewed the test schedules to
determine what the driving activities were during this window, and determined that in the 1-Nov-
2010 update schedule, the TIA/MIP 3.7 included a fragnet representing a differing site condition.
Similar to the fragnet insertion described in the CPA / MIP 3.3 section above, the insertion of
the fragnet caused the sudden loss of 24 CD during this window. In comparison, the
CPA/MIP 3.3 line identifies only an 8 CD delay during the same month, related to poor
contractor production. If the effects of the differing site condition fragnet are to appear in the
CPA, they will do so when the delayed start of the impacted as-planned activity (the same
activity that is the successor to the fragnet in the TIA) consumes that activity’s float and alters
the contemporaneous update’s predicted completion date. Again, assuming that the
contemporaneous schedules represent the contemporaneous understanding of criticality, this is
the point when the project management team would have begun to see this activity as critical.
This dichotomy reveals the heart of many disputes. One party uses a modeled technique that
“proves” that the critical path ran through an owner-caused differing site condition, while the
other party’s modeled technique “proves” that the problem was actually sustained poor
production. Particularly if the contractor is using the TIA/MIP 3.7 and the owner is using the
CPA/MIP 3.3, this argument can go on without resolution. However, the cumulative delay graph
highlights the timing of the delay accrual, which relates directly to the contemporaneous
understanding of criticality. The TIA/MIP 3.7 effectively alleges that, as of 1-Nov-2010 (or
reasonably close to that date) the contractor had identified the differing site condition, had
estimated the duration of time necessary to overcome the change in order to return to contract
work, and had perceived that the predicted completion date was delayed by 24 CD as a result.
These are the facts that must be proven to establish the propriety of the TIA/MIP 3.7’s
conclusions; without this, it is very easy to foresee scenarios when one party’s analyst simply
forward-impacts a CPM model with fragnets of the other party’s delays until the analyst’s client
apparently bears no responsibility for any delay. The TIA/MIP 3.7 line will simply stair-step
down through the project duration, claiming that delay accrued earlier than it actually did and
was always the responsibility of the other party. Identifying owner delays earlier than they
actually affect the ongoing work, through either TIA/MIP 3.7 or CPA/MIP 3.3, can result in
serious overestimation of owner delays. Therefore, for the purposes of establishing
that the TIA/MIP 3.7 graph is the appropriate measurement tool and that it should supersede the
other method’s graph for a given period, the analyst performing the TIA/MIP 3.7 should
establish the following: [25]
1. The analyst must confirm that the means and methods were accurately represented in the
contemporaneous update.
2. The analyst must confirm that the schedule was used to plan and execute the project, and
that the results of the CPM calculation influenced the contemporaneous understanding of
criticality.
3. The analyst must also confirm that as of the Data Date of the schedule (or reasonably
soon thereafter) the project management team became aware of the issue modeled in the
fragnet, that they impacted the schedule with the fragnet, and that the resulting shift in the
critical path and later predicted completion date influenced the project management
team’s contemporaneous understanding of criticality.
4. The analyst should also be prepared to discuss whether there was contemporaneous
pacing.
MIP 3.9: Collapsed As-Built
The Collapsed As-Built method develops a CPM model of the as-built schedule by creating logic
and durations that reflect the apparent logic that drove the work and the actual dates on which the
work was performed. The analyst then dissolves selected delay activities and recalculates the
schedule in order to show what would have happened had a certain event not taken place. The
Collapsed As-Built method can either be performed in a single step (deleting all alleged delay
activities at once) or in multiple steps (removing one activity at a time and recalculating after
each deletion). A conceptual advantage to the Collapsed As-Built method is that the as-built
schedule contains both parties’ delays, so if the analyst removes only one party’s delays from the
schedule, the other party’s delays are still present. In other words, the Collapsed As-Built
naturally considers both parties’ delays while the other methodologies discussed in this paper
consider all the delays. Because of this characteristic, and for the purposes of this study, the
authors have performed the analysis subtracting BOTH the delays that are the responsibility
of the owner and those belonging to the contractor. This technique involves creating a series of
CPM schedules which were not used on the project; therefore, it does not rely upon the
contemporaneous understanding of criticality.
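A minimal sketch of the dissolve-and-recalculate loop follows; "dissolving" here removes a delay activity while joining its predecessors to its successors, and the toy as-built network is invented for illustration.

```python
# Sketch: one Collapsed As-Built step - dissolve a delay activity, recompute.

def forward_pass(network):
    """Early finish (in project days) for each activity in an acyclic network."""
    early_finish = {}
    remaining = dict(network)
    while remaining:
        for act, (dur, preds) in list(remaining.items()):
            if all(p in early_finish for p in preds):
                early_finish[act] = max((early_finish[p] for p in preds), default=0) + dur
                del remaining[act]
    return early_finish

def dissolve(network, target):
    """Remove target, joining its predecessors to its successors."""
    _, target_preds = network[target]
    collapsed = {}
    for act, (dur, preds) in network.items():
        if act == target:
            continue
        new_preds = [p for p in preds if p != target]
        if target in preds:
            new_preds.extend(target_preds)  # bypass the dissolved activity
        collapsed[act] = (dur, new_preds)
    return collapsed

as_built = {  # hypothetical as-built network with one owner delay activity
    "excavate": (10, []),
    "dsc_delay": (24, ["excavate"]),
    "footing": (15, ["dsc_delay"]),
    "column": (20, ["footing"]),
}
print(max(forward_pass(as_built).values()))                         # 69: as-built
print(max(forward_pass(dissolve(as_built, "dsc_delay")).values()))  # 45: collapsed
```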
For this analysis, the authors started with the test series’ Collapsible As-Built Schedule, and
dissolved each owner or contractor delay activity in turn, beginning with the activity with the
latest finish date and moving backwards. After each dissolution, the schedule was recalculated
and the change in the predicted completion date was recorded. As shown in FIGURE 3, after all the
owner delay activities were dissolved, the predicted completion date had shifted 31 CD earlier
than the actual finish. As such, these 31 CD were assigned to the owner, while the remaining 285
CD were assigned to the contractor.
The conclusion that could be drawn from the CAB analysis is that, but for the delays of the
owner, the contractor would only have finished 31 CD earlier. Note that the weather exclusion
period was not regained during the dissolution of the owner delay activities; therefore, it is
possible to conclude that regardless of the owner’s delays, the contractor would have
encountered the weather exclusion period’s jump in predicted completion date on its own. This
explains why the weather exclusion period days are assigned to the contractor in FIGURE 4.
Given the divergence between the CAB’s cumulative delay graph and those of the other methods,
the authors have attempted to find alternate implementations that might help reconcile the
CAB’s results with the other methods’ results. As discussed, the CAB measures delay in a
significantly different manner than the other three methods. First, it does not attempt to start at
the NTP date, where there were zero days of delay accrued, and work forward through each
window. Instead, it analyzes the project in reverse, starting with the actual number of delay days
accrued. Second, the method is designed in its normal application to specifically leave behind
one party’s delays. The authors therefore performed the collapsed as-built analysis again;
however, in this second implementation, the delays of both parties (including progress-related
delays) were dissolved in turn at each data date. The results of the second implementation are
shown in FIGURE 12.
FIGURE 12: CAB/MIP 3.9 with both parties’ delays removed, compared to CAB/MIP 3.9
with only the contractor’s delays removed
The second implementation shows project delays progressing backward towards (and ultimately
reaching) zero. This curve appears more similar to the other cumulative delay curves produced
by the other methods (see FIGURE 13).
This second implementation also serves as a check on the adequacy of the collapsible as-built
model. As previously stated, the authors acknowledge that the creation of a collapsible model is
one of the more difficult and sometimes controversial aspects of performing MIP 3.9. Plotting
the cumulative delay graph as described in FIGURE 12
may provide a useful back-check to the as-built logic applied by the analyst creating the
collapsible model.
Method Reconciliation
Again, it is commonly asserted that different methods applied to the same set of facts can result
in ostensibly different results. This is particularly true when examining apportionment of delay
resulting from two methods. For instance, in a TIA / MIP 3.7, inserted fragnets are only
representative of owner delays. As such, the fragnet insertion can radically shift apportionment
of delays when compared to the CPA / MIP 3.3. FIGURE 14 shows a comparison of the results
of the CPA / MIP 3.3 and the TIA / MIP 3.7, where the period delay is assigned to the party who
owned responsibility for the causal activity in that update schedule.
FIGURE 14: Delay Responsibility Allocation - TIA / MIP 3.7 to CPA / MIP 3.3
It is clear that the fragnet insertion in the TIA / MIP 3.7 significantly shifts the apportionment of
delay in the schedules. The shift in perceived contractor-caused delay, from 39% in the CPA /
MIP 3.3 to just 13% in the TIA / MIP 3.7, is a major difference in the two analyses,
and would likely be the cause of argument between two competing analysts. Analysts performing
a CPA / MIP 3.3 on behalf of an owner would likely state that the TIA / MIP 3.7 is taking
advantage of the “stair-step” nature of that analysis to continually push the critical path just
beyond the influence of contractor-owned activities by constant insertion of owner-caused
fragnets. Given an understanding of the contemporaneous understanding of criticality, however,
the RTIA / MIP 3.7 could be a legitimate view of the project’s delays, if the fragnets were
understood at the time the delay is shown to have accrued, and the knowledge of the critical path
influenced the project management team’s actions.
Given the foregoing discussion, what use are the cumulative delay graphs in resolving
differences in analyses? We see two major cases: achieving agreement in periods where results
are substantially similar, and finding important points of discussion for resolution in periods
where results are in conflict.
METHOD RECONCILIATION IN CASES OF SIMILAR RESULTS
FIGURE 15 below places the results of the As-Planned As-Built / MIP 3.2 next to the results of
the TIA / MIP 3.7. Comparing the results of the two analyses, specifically for the three month
period highlighted in orange, the two analyses both show roughly the same loss.
The As-Planned As-Built / MIP 3.2 shows a 26 CD loss during this period, and a final
cumulative delay at the end of the period of 98 CD. In comparison, the TIA / MIP 3.7 shows a
loss of 24 CD during the same period, and an end of period delay of 100 CD cumulatively.
Moreover, the two analyses show the same activity (#WD0070) driving the as-built critical
path for the entirety of this period. Assuming that both parties agree on which one is responsible
for the delays caused by this activity – and it is designed to represent an owner-caused delay in
this example – then the parties should be able to quickly agree on a negotiated settlement for at
least this period of the project. Comparison of the causal activities within each successive period,
along with a comparison of the amount of delay caused by the delay activity (as quantified in the
cumulative delay graphs) should allow quick agreement on all the periods in which there is
substantial agreement between the two analyses.
Should the parties agree on the causal activity and the magnitude, but not on the responsibility,
then the dispute shifts from one about competing FSA methods to one about the facts of the case
that are necessary to demonstrate responsibility for a given delay. In this case, though, it is
important to the FSA analyst community that the analyses themselves not be seen as unhelpful,
or even detrimental, to the resolution of the issues.
FIGURE 17: Example Period with Conflicting Results, As-Planned As-Built to TIA
The period highlighted in FIGURE 17 shows a two-month period where the As-Planned As-Built
/ MIP 3.2 shows a 16 CD loss overall, and the TIA / MIP 3.7 shows 35 CD of delay overall. The
difference in quantum of the delay during this period is already a cause for concern, and closer
inspection of the schedules shows the reasons. In the TIA, a fragnet was inserted into the over-
water resource path to represent a design error that affected the installation of the concrete spans.
This fragnet drives the critical path of the TIA / MIP 3.7 forward as a result; however, in the As-
Planned As-Built / MIP 3.2, the critical path flows through production delays in the over-land
resource path. In other words, the as-built critical path does not agree that the design error
fragnet ultimately delayed the project.
Now, refer to FIGURE 18, which compares the same period’s TIA / MIP 3.7 to the CPA / MIP
3.3.
So, which of the three critical paths is correct? Reconciliation of the methods, and finding the
correct answer, hinges upon determination of which graph is most appropriate, given the
contemporaneous understanding of criticality. If schedules were not used or maintained (and as a
result there is no contemporaneous understanding of criticality), the As-Planned As-Built / MIP
3.2 line is likely correct. If, however, the schedules were used by the project management team,
but the fragnet does not represent contemporaneous knowledge, the Contemporaneous Period
Analysis / MIP 3.3 line is likely correct. But, if the schedules were used and the fragnet does represent
contemporaneous knowledge and it influenced the project management team’s understanding of
what was driving the critical path, the TIA / MIP 3.7 line is likely correct. This highlights a key
point, and underscores the importance of acknowledging the contemporaneous understanding of
criticality as an essential part of method selection and implementation: the cumulative delay
graph highlights when delay either actually occurred (As-Planned As-Built / MIP 3.2 DDM) or
when it was perceived by the parties to have occurred (CPA / MIP 3.3 or the RTIA / MIP 3.7,
depending on the understanding of the fragnets). In order to prove that this perception represents
a valid means for viewing project delay, analysts should establish whether the contemporaneous
understanding of criticality can or cannot be assumed. It is possible to compare an analysis which
does assume it and one which does not (comparing MIP 3.2 to MIP 3.3 is particularly
effective). In situations where the contemporaneous understanding of criticality cannot be
established, it may be necessary to eliminate certain methods from consideration. Analyses
developed outside of standards of good practice will likely show radically different results in the
cumulative delay graph, in which case the graph will be helpful in identifying such
inconsistencies in the opposing analyst’s work and refuting his or her analysis.
CONCLUSIONS
The cumulative delay graph can be a useful tool in reconciling the apparently different
results of methods. It is particularly useful when used as part of a larger process of putting the
results of the methods into a common format and a collaborative effort between the parties to
establish periods of similarity and differences. The cumulative delay graph will aid in
establishing when delays accrued; it will not, however, resolve disputes where the causal activity
is agreed upon but the underlying reason for delay is at issue.
Generally speaking, the As-Planned vs. As-Built/MIP 3.2 DDM line establishes when the delay
actually occurred. The CPA/MIP 3.3 line tends to show that delay accrues slightly earlier than
the As-Planned vs. As-Built/MIP 3.2 DDM line, because the CPA/MIP 3.3 is calculating the
delay to the predicted completion date based on the unedited CPM network alone. The TIA/MIP
3.7 tends to show that delay accrues earlier than the CPA/MIP 3.3, because the TIA/MIP 3.7 is
calculating delay to the predicted completion date based on the CPM network as impacted by
fragnets. A longer fragnet will tend to claim more delay earlier. And although the standard
implementation of the Collapsed As-Built / MIP 3.9 results in a significantly different
cumulative delay graph than the other methods (as is expected, given the backwards approach of
the analysis) it is possible to use an alternate implementation of the collapsed as-built that deletes
both the owner and contractor delays in turn to generate a cumulative delay graph. In this
analysis, that graph generally mirrors the path of the CPA / MIP 3.3 and the TIA / MIP 3.7,
though it tends to lead both curves by a number of days.
The cumulative delay graph highlights when delay either actually occurred, as in the As-Planned
vs. As-Built/MIP 3.2 DDM line, or when it was perceived by the parties to have occurred, as with
the CPA/MIP 3.3 and the TIA/MIP 3.7. In order to prove that this perception represents a valid
means for viewing project delay, analysts should establish whether the contemporaneous
understanding of criticality can or cannot be assumed. However, it is still possible to compare an
analysis which does assume and one which does not (such as MIP 3.2 to MIP 3.3, which is
particularly effective). But in a situation where the contemporaneous understanding of criticality
cannot be established, it may be necessary to eliminate certain methods from consideration.
Finally, analyses developed outside of standards of good practice will likely show radically
different results on this chart. Therefore, the use of this technique can help refute the technical
implementation of the opposing expert’s analysis.
KEY WORDS
Schedule Delay
Delay Methodology
ENDNOTES
1 Bruner, P. and O’Connor, J., Bruner & O’Connor Construction Law, West
Thompson Reuters, New York., Volume 2, § 15.130 p. 348 (2007). The authors
state: “Proof of what activities were ‘critical’ to timely completion at any point
in time is no easy task because the critical path is dynamic and accommodates
and adjusts to ever-changing job conditions.”
2 Bruner, P. and O’Connor, J., Bruner & O’Connor Construction Law, West
Thompson Reuters, New York., Volume 2, § 15.133 p. 352 (2007)
19 Wickwire, J., Driscoll, T., Hurlbut, R., and Hillman, R., Construction
Scheduling: Preparation, Liability and Claims, 3rd edition, Aspen
Publishers, § 15.06 p. 651 (2010)
20 Wickwire, J., Driscoll, T., Hurlbut, R., and Hillman, R., Construction
Scheduling: Preparation, Liability and Claims, 3rd edition, Aspen
Publishers, § 9.05, p. 268 (2010). The seminal text explains the primacy
of understanding the contemporaneous understanding of criticality within a
forensic delay analysis: “Delays are best evaluated on a chronological and
cumulative basis, taking into account the status (and critical path[s]) of the
project at the time of the delay in question. With this method and protocol, all
parties on the project live with the events, actions, and ‘sins’ of the past.”
21 In a CPA, wide windows (greater than a month) are undesirable. One of the
major benefits of a CPA is to track the movement of the critical path, which is
known to be variable based on progress and evolving means and methods. Using
wide windows opens the possibility that the critical path will undergo multiple
shifts during the window and will not be cataloged by the analyst. This would
allow delays to be misallocated to specific events and parties. A month-long
window is usually the maximum width because the pay applications – a useful
back-check on the state of progress to date – are
generally submitted on a monthly basis.
27 Ness, A., “Experts and Expertise in Construction: Black Letter Law and the
Debate of Whether Scheduling/Programming Experts are Imposters - It’s All
Smoke and Mirrors.” Conference of the International Bar Association, Dublin
(1-Oct-2012)