J Omega 2014 06 001
www.elsevier.com/locate/omega
PII: S0305-0483(14)00074-7
DOI: http://dx.doi.org/10.1016/j.omega.2014.06.001
Reference: OME1408
Cite this article as: Jeroen Colin, Mario Vanhoucke, Setting tolerance limits for
statistical project control using earned value management, Omega, http://dx.doi.
org/10.1016/j.omega.2014.06.001
Setting tolerance limits for statistical project control using earned
value management
Jeroen Colin a,∗, Mario Vanhoucke a,b,c
a Faculty of Economics and Business Administration, Ghent University, Tweekerkenstraat 2, 9000 Gent
(Belgium)
b Technology and Operations Management, Vlerick Business School, Reep 1, 9000 Gent (Belgium)
c Department of Management Science and Innovation, University College London, Gower Street, London WC1E
6BT (United Kingdom)
Abstract
Project control has been a research topic for decades, attracting both academics and practitioners.
Project control systems indicate the direction of change in preliminary planning variables
compared with actual performance. When the current project performance deviates from
the planned performance, the system issues a warning so that corrective actions can be
taken.
Earned value management/earned schedule (EVM/ES) systems have played a central role in
project control, and provide straightforward key performance metrics that measure the devia-
tions between planned and actual performance in terms of time and cost. In this paper, a new
statistical project control procedure sets tolerance limits to improve the discriminative power be-
tween progress situations that are either statistically likely or less likely to occur under the project
baseline schedule. In this research, the tolerance limits are derived from subjective estimates for
the activity durations of the project. Using the existing and commonly known EVM/ES metrics,
the resulting project control charts will have an improved ability to trigger actions when variation
in a project’s progress exceeds certain predefined thresholds.
A computational experiment has been set up to test the ability of these statistical project control
charts to discriminate between variation that is either acceptable or unacceptable in the duration of
the individual activities. The computational experiments compare the use of statistical tolerance
limits with traditional earned value management thresholds and validate their power to report
warning signals when projects tend to deviate significantly from the baseline schedule.
Keywords: Project Management, Scheduling, Risk, Simulation
1. Introduction
Project management and control have been research topics since the development of project plan-
ning approaches such as the critical path method (CPM, [1, 2]) and the Program Evaluation and
Review Technique (PERT, [3]). The majority of the research endeavors published in the academic
literature focuses on the construction of a project baseline schedule in the presence of limited
resources (see e.g. the recent survey by Hartmann and Briskorn [4]). Ever since,
the use of a project baseline schedule has been put into the right perspective, as it only acts as a
∗ Corresponding author
Email addresses: [email protected] (Jeroen Colin), [email protected] (Mario Vanhoucke)
using Monte Carlo simulation to imitate a fictitious project progress environment are discussed.
Section 6 draws general conclusions and highlights future research avenues.
This literature review introduces the main concepts of statistical process control (SPC) as they
are applied to monitor and control processes in services and manufacturing. The application of
SPC techniques to earned value management is not new in the literature. We introduce a statistical
project control procedure that aims to overcome some of the issues with the previously published
approaches, but which is not an implementation of SPC in the strict sense, although it reproduces
some of its concepts and nomenclature. The last paragraph of this literature review is
dedicated to a summary of the differences between our statistical project control procedure and
the implications of applying SPC to project control in a strict sense.
In standard SPC applications, a state of control is identified with a process generating samples
for which the subgroup averages are approximately normal under the central limit theorem. Control
charts such as the Shewhart, cumulative sum (CUSUM) and exponentially weighted moving
average (EWMA) charts serve as on-line procedures to monitor process stability, to detect
assignable variation or to forecast process movements in industrial processes [14]. A process
is said to be in-control when only common cause variation is present. This type of variation
is characterized as coming from phenomena constantly active in a process, which can be pre-
dicted probabilistically. In his original work on process control, Shewhart [15] introduced the
term chance cause. A process is said to be out-of-control if a second type of variation is present
known as assignable cause variation. Assignable cause variation arises when a new, previously
unanticipated phenomenon is present in the system and should cause a signal.
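As a numerical illustration of this standard SPC machinery (not part of the paper's own procedure), the following sketch computes Shewhart individuals (XmR) chart limits from the average moving range; the observation values and the conventional bias constant d2 = 1.128 are illustrative assumptions:

```python
# Sketch: Shewhart individuals (X) chart limits from the average moving range.
# Illustrative only; constants follow the standard XmR convention (d2 = 1.128).

def xmr_limits(observations):
    """Return (LCL, CL, UCL) for an individuals control chart."""
    n = len(observations)
    center = sum(observations) / n
    # Average moving range over consecutive observations
    mr_bar = sum(abs(observations[i] - observations[i - 1])
                 for i in range(1, n)) / (n - 1)
    sigma_hat = mr_bar / 1.128          # d2 for subgroups of size 2
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

lcl, cl, ucl = xmr_limits([1.02, 0.98, 1.01, 0.97, 1.03, 0.99])
```

An observation falling outside [LCL, UCL] would be flagged as assignable cause variation; common cause variation stays inside the limits by construction.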
Figure 1 illustrates how a control chart applied to project control might be interpreted. Periodic
observations of a performance measure for a fictitious project execution are outlined, along with
an illustrative upper control limit (UCL) and lower control limit (LCL). For illustrative purposes,
we will assume that the performance measure is of the type where a high value is good and a
low value acts as a trigger for actions due to potential problems. The frequently encountered
schedule performance index (SPI, section 2.3.1) and schedule performance index using earned
schedule (SPI(t), section 2.3.1) measures in EVM/ES are of this type, although control charts of
the opposite type have also been proposed where the natural logarithm of the reciprocal of SPI or
SPI(t) is taken [16]. If the observations fall outside the project control limits, the charts report
a signal which could trigger actions to bring the project back to plan or to exploit opportunities.
More precisely, when an observation falls below the lower control limit, the performance has
likely dropped below the pre-defined acceptable margin on the baseline estimates. In this case,
this signal should be interpreted as a warning signal to the project manager to consider taking
corrective actions. When the performance indicator exceeds the upper control limit, it can be
seen as a signal to exploit opportunities, in which case the current project schedule might be
re-baselined to incorporate this opportunity. Different SPC control charts have been implemented
for project control using earned value management by [16–21]. A short overview of their respective
approaches and findings is listed below.
Lipke and Vaughn [17] apply an XmR chart (individuals and moving range chart) and calculate the
control limits based on a two-observation sample of earned value performance metrics. They use
the reciprocal measures of CPI (cost performance index) and SPI (schedule performance index),
as they have shown that these are normalized, such that many of the issues of applying SPC,
such as variability and homogeneity of the data, are avoided. Bauch and Chung [18] modify the
statistical process control charts proposed by Shewhart to be used with earned value management
considering three major aspects. Firstly, they assess single observations relative to historical project
Figure 1: Control chart indication of out-of-control signals: an observation above the UCL signals an opportunity (consider re-baselining), while an observation below the LCL signals danger (take corrective actions).
data from 20 similar projects. Secondly, as projects span a finite time horizon, which is typically
different for each project, they normalize the different time lengths of each project into a consistent
number of time periods, chosen as 20 discrete measurements for the purpose of the paper. As a
third modification, they adopted the CUSUM approach [22] to incorporate the increasing nature
of period-to-period measurements of project performance. The typical y-axis of control charts
is transformed to plot cumulative project performance values. As a result, the central line and
upper and lower limits are non-stationary. Wang et al. [19] illustrate the use of SPC charts
using EVM/ES on a set of more than 30 software projects where abnormal progress situations are
effectively detected. [20] combine XmR charts with EVM/ES performance metrics, but implement
the logarithm of the reciprocal of SPI and of the cost performance index (CPI), as earlier
proposed by Lipke [16]. Control limits are calculated from historical data of 120 projects performed
by a management consulting company. Leu and Lin [20] introduce the division of different cases
based on historical factors influencing project performance and quantify the XmR chart procedure
for different trends observed in their data. Moreover, they accurately describe how “the quick
proliferation and complexity of project performance data indicate the need for a well-organized
project performance analysis process.” Aliverdi et al. [21] suggest transforming CPI data when
quality control charts are applied for cost control. SPC charts for both individual and moving
range measurements were successfully implemented. They conclude that the integrated approach
of EVM/ES and statistical quality control charts can contribute to a better and more reliable
project control process.
Overall, previous research on the integration of EVM/ES with SPC techniques for projects has
established that it improves project control by providing an objectively based and easily imple-
mented real-time monitoring system. The evidence provided in the literature is based on empirical
research of either post-hoc statistical analyses on projects [19] or the implementation of control
limits derived from historical data [16, 20]. The dependency on historical data does not pose an
immediate problem for statistical project control approaches; ideally, the use of historical records
is preferred over subjective estimates for project control [23]. However, the concept of “similarity”
between projects is often vaguely defined. When historical EVM/ES data is directly applied, the
project-specific EVM/ES dynamics should not be ignored. [24] illustrated these project-specific
dynamics with EVM/ES forecasting becoming less reliable for projects with a network close to
a parallel network structure. When the concepts of SPC are directly translated to incorporate
EVM/ES measurement, an assumption with respect to their distribution has to be made. In
previous literature this prerequisite was fulfilled by transforming the data to appear normally
distributed [17, 19] or lognormally distributed [16, 20], or appears to have been overlooked [21].
In section 2.3 we propose a different statistical project control procedure that retains
the appealing aspects of using control charts. They are easy to set up, implement and interpret.
However, the imposed tolerance limits on the earned value metrics should be interpreted in the
more technical sense [25]. The tolerance limits are acquired from the empirical distribution function
(edf) of the EVM/ES metric under study at different points along the life-cycle of a project. This
edf is obtained from a Monte Carlo simulation model that allows variation to be added at the
activity level of the project. This variation can be introduced as subjective estimates of what
deviations are statistically plausible during the execution of the project or from calibration with
historical in-company data.
Table 1: Difference between SPC for projects and our statistical project control procedure
Table 1 summarizes the main differences between our statistical project control procedure that
uses tolerance limits and a classical SPC implementation for project control. The main aspects
are given along the following lines.
• Data input: A first fundamental difference lies in the data input necessary to construct the
control limits. While process data needs to be collected at regular time intervals to monitor
the behavior of a process in progress when classical SPC is implemented, our control charts
rely on inputs of estimates that express the desired tolerance on the activity durations or
allow calibration to historical data at the activity level. Consequently, these inputs are
used to express an ex ante desired state of outcome on activity durations rather than an
observed ex post outcome in the measured variable. For this reason, the activity distribu-
tion inputs can and probably will be different than the real activity distributions, and the
tolerance limits are set up to detect these differences and to receive timely warnings when
real activity durations exceed the desired thresholds. Unlike classical applications of SPC
to project control, which aim at detecting deviations from the average on-going state of
the measurable variable in progress, the project control charts using tolerance limits aim at
detecting deviations from a predefined desired state in the variable defined before the start
of the progress.
• Dynamic project performance measurements: Given the ex ante approach of setting tolerance
for project control, the procedure requires a large sample of data points to predict the impact
of the acceptable activity tolerances on the tolerance limits set for EVM/ES. Therefore, the
data is collected using Monte-Carlo simulations using the desired tolerances on the activity
distributions and measuring their impact on the EVM/ES output metrics such as SV, SPI,
SV(t) and SPI(t). Consequently, these output metrics are used to construct tolerance limits
given the desired input parameters. This stands in stark contrast to the on-going data collection
method of observed process data used to construct control limits to detect deviations from
normal or average process behavior in a classical application of SPC.
• Control limits: Due to the fundamental difference between the control limits of SPC for
project control and the statistical tolerance limits based on the desired activity duration
tolerances of our project control approach, we have decided to use the term tolerance limit
rather than control limit during the construction of the control charts. Consequently, the
control limits set on the EVM/ES output metrics SV, SPI, SV(t) and SPI(t) are used to define
the thresholds on the tolerances in project performance behavior given the desired tolerances
on the activity durations and hence act as warning signals to measure unacceptable project
progress behavior compared to the baseline schedule. This is exactly what an EVM/ES
system is trying to accomplish and this view is consistent with the definition of a project
control system [9]. Project based performance metrics along the project progress are used to
act as an action threshold to detect underlying problems that are responsible for the deviation
between planned outcome and project progress outcome. The tolerance limits used in this
paper define these action thresholds based on a statistical analysis of the simulated data
resulting from a pre-specified allowable variation.
2.3. Project control charts
Monte Carlo simulations. Monte Carlo simulations in project management and control have been
used widely in the literature. Since its introduction [26] to analyze project networks, the methodology
has been used for a range of project management research projects and applications. In [27],
a classified bibliography of project risk management research is given where simulation plays a
central role. Ever since, Monte Carlo simulation studies have been a respected methodology in
project management and scheduling. The simulation studies in this paper are based on the project
control simulation studies described in [28, 29]. The methodology used has been initially developed
by Vanhoucke and Vandevoorde [24] and is formalized in a dynamic scheduling simulation tool,
P2 Engine [30].
In the current paper, the simulation runs will be used to simulate fictitious project progress in order
to obtain periodic project performance data using the well-known project performance indicators
obtained by the EVM/ES key metrics. Consequently, fictitious project progress is simulated for
a number nrs of Monte Carlo runs. Variation in the activity durations will have an impact on
the project performance over time and therefore, the variable Xsp will be used to denote the
performance indicator value from simulation run s (s = 1, . . . , nrs) measured at a certain moment
in time between the start and the finish of the simulated project progress when it is p percent
completed (p = (1, . . . , P )×∆PC). A total of P observations are taken with increment ∆PC.
The following performance indicators are periodically measured during the simulation runs:
SV Schedule variance (SV = EV - PV)
SPI Schedule performance index (SPI = EV / PV)
SV(t) Schedule variance using earned schedule (SV(t) = ES - AD)
SPI(t) Schedule performance index using earned schedule (SPI(t) = ES / AD)
with
PV Planned value
EV Earned value
AC Actual cost
ES Earned schedule
AD Actual duration
BAC Budget at completion
PC Percentage project completion (PC = EV / BAC) (0% ≤ PC ≤ 100%)
P Number of observations taken, with an increment ∆PC
For the sake of completeness, we also include SV and SV(t) in our research. Previous EVM/ES
studies using SPC often neglected these performance metrics, and thus very little empirical evidence
for their distributions is available. Since we do not assume any distribution for the control measures
a priori, we can incorporate them into our research. Note that we do not incorporate the cost
control metrics cost variance (CV = EV - AC) and cost performance index (CPI = EV / AC). In
our simulation model, cost is always strongly correlated with the time-performance of the activity,
which might not necessarily be the case in real-life. Since our focus is on controlling the time
performance of a project, we restrict the analysis to those metrics specifically designed to do
so.
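As a worked illustration of the metric definitions above, the following sketch (our own; the PV curve and observation values are invented) computes SV, SPI, SV(t) and SPI(t), using the standard linear-interpolation earned schedule formula ES = t + (EV − PV_t)/(PV_{t+1} − PV_t):

```python
# Sketch: the EVM/ES schedule metrics used in the paper, computed from a
# planned value curve and an earned value observation. Illustrative only.

def earned_schedule(pv_curve, ev):
    """ES = t + (EV - PV_t) / (PV_{t+1} - PV_t), where PV_t <= EV < PV_{t+1}.
    pv_curve[k] is the planned value at the end of period k+1."""
    for t in range(len(pv_curve)):
        if ev < pv_curve[t]:
            prev = pv_curve[t - 1] if t > 0 else 0.0
            return t + (ev - prev) / (pv_curve[t] - prev)
    return float(len(pv_curve))

def schedule_metrics(pv_curve, ev, actual_duration):
    pv = pv_curve[actual_duration - 1]      # planned value at the review time
    es = earned_schedule(pv_curve, ev)
    return {"SV": ev - pv, "SPI": ev / pv,
            "SV(t)": es - actual_duration, "SPI(t)": es / actual_duration}

# Example with a simple linear PV curve of 10 periods, 100 units in total:
pv = [10.0 * t for t in range(1, 11)]
m = schedule_metrics(pv, ev=35.0, actual_duration=4)
```

With EV = 35 after 4 periods against PV = 40, the project is behind schedule: both SPI and SPI(t) fall below 1.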
Add tolerance limits. The simulated data is used to construct control limits for the project control
variables Xp =(SV, SV(t), SPI, SPI(t)) at a review period p for two distinct types of control
charts. The proposed X chart and the R chart resemble the Shewhart individuals (indX) and
moving range (mR) control charts in their ability to inspect individual observations [31]. Control
limits for the Shewhart charts are constructed from the average moving range (i.e. the mean
absolute difference between an observation xi and its predecessor xi−1), calculated using the previous
m − 1 observations:

MR̄ = (1/(m − 1)) · Σ_{i=2}^{m} |x_i − x_{i−1}|    (1)
This concept of moving range is not adopted here to construct tolerance limits for project control,
since it implies that observations are taken from a consistently operating process. Instead, we
assume that our nrs simulation runs provide sufficient observations to reconstruct the distribution
function for the variable Xp at a review period p ∈ (1, . . . , P ) × ∆PC. Note that we do not assume
two random variables Xj and Xk (where j, k ∈ (1, . . . , P ) × ∆PC) to be independent, but we will
construct tolerance limits for each of them independently. For any two adjacent review periods,
we will look at the range Rj = |Xj − Xj−1 | (j ∈ (1, . . . , P ) × ∆PC) and treat this variable as a
new project performance variable.
• The X chart is used to monitor individual observations for the project performance variables
Xp at a review period p. Tolerance limits can be calculated from the empirical distribution
function F̂Xp (t).
F̂_Xp(t) = (1/nrs) · Σ_{s=1}^{nrs} 1{X_sp ≤ t}    (2)
By the strong law of large numbers, we know that the empirical distribution function (2)
asymptotically converges to the real cumulative distribution function F(t) for all possible
values of t.
The lower and upper statistical tolerance limits at a review period p for a level α (LTL^α_Xp
and UTL^α_Xp) are then calculated as the α-th and (1 − α)-th quantiles of the distribution F(t).
The quantiles can be calculated from the inverse of the empirical distribution function or
by estimation of the sample quantiles. Let Q̂(α)p for 0 < α < 1 denote the sample quantile
based on a set of independent observations {X1p, X2p, . . . , Xnrs p} from the distribution
F at review period p. If the order statistics of {X1p, X2p, . . . , Xnrs p} are denoted as
{X(1)p , X(2)p , . . . , X(nrs)p }, then Q̂(α)p can be estimated as:
The value for the level α can be chosen subsequently with respect to a total level of control
over all P review periods of the project. We will illustrate this in section 5.1.
• The R chart monitors the range R (or difference) between two adjacent observations. This
results in tolerance limits for a vector of observations that is one element shorter than that
of the X chart. The empirical distribution function F̂Rp (t) for R at a period p can be found
using nrs simulation runs.
F̂_Rp(t) = (1/nrs) · Σ_{s=1}^{nrs} 1{R_sp ≤ t}    (6)
Tolerance limits for the R chart follow analogously, with Q̂(α)p now calculated from the
order statistics {R(1)p , R(2)p , . . . , R(nrs)p }. The intuitive
explanation for the use of the range chart is that the difference between two measurements
of project performance relates to the instantaneous change of the performance. Since a
project consists of activities that begin and end at discrete time instances, the transformed
variable Rp should better reflect the performance of activities that are being executed at
that time instance and the project control process is less cluttered by the performance of
past activities.
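The construction of both charts can be sketched numerically. The following minimal illustration (our own, assuming numpy; the α value and the uniform stand-in for the simulated SPI runs are invented) derives X and R chart tolerance limits from the empirical quantiles of nrs simulated runs:

```python
# Sketch: X and R chart tolerance limits from simulated performance data.
# Rows of `runs` are simulation runs; columns are the P review periods.
import numpy as np

def tolerance_limits(samples, alpha=0.10):
    """Return the alpha-th and (1 - alpha)-th sample quantiles of one period."""
    return np.quantile(samples, alpha), np.quantile(samples, 1.0 - alpha)

rng = np.random.default_rng(42)
runs = rng.uniform(0.9, 1.1, size=(1000, 8))       # stand-in for nrs SPI runs

# X chart: limits per review period, from the empirical distribution of X_p
x_limits = [tolerance_limits(runs[:, p]) for p in range(8)]

# R chart: limits on the range R_p = |X_p - X_{p-1}| (one period shorter)
ranges = np.abs(np.diff(runs, axis=1))
r_limits = [tolerance_limits(ranges[:, p]) for p in range(7)]
```

No distributional assumption is made: the limits come directly from the sample quantiles, consistent with the distribution-free construction above.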
the liberty to present our control charts in such a way. Control charts and hypothesis tests can
be regarded as alternatives and the analogy is as presented below:
The null and alternative hypotheses of the project schedule control procedure proposed in this
paper are
H0 : The project is executed as planned
Ha : The project is executed not as planned
This hypothesis is tested using periodic control charts, by checking whether the current project
performance Yp lies within the control limits or not, as follows
Accept H0 (X chart): LTL^α_Xp ≤ Yp ≤ UTL^α_Xp
Accept H0 (R chart): LTL^α_Rp ≤ |Yp − Yp−1| ≤ UTL^α_Rp
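One way to operationalize these acceptance rules is as a small classifier (our own combination, which accepts H0 only when both charts agree; the function name and limit values are hypothetical):

```python
# Sketch: the project control decision rule as a classifier. A signal is
# raised as soon as either the X chart or the R chart rejects H0.

def classify(y, y_prev, x_limits, r_limits):
    """Return True (accept H0: 'as planned') only if both charts accept."""
    ltl_x, utl_x = x_limits
    ltl_r, utl_r = r_limits
    x_ok = ltl_x <= y <= utl_x                 # X chart: level of performance
    r_ok = ltl_r <= abs(y - y_prev) <= utl_r   # R chart: period-to-period change
    return x_ok and r_ok

# An SPI within the X limits and a small change within the R limits:
as_planned = classify(0.97, 0.99, x_limits=(0.85, 1.15), r_limits=(0.0, 0.10))
```

A rejection below LTL would be read as a warning to consider corrective action; a rejection above UTL as an opportunity signal.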
3. Illustrative example
This section illustrates the calculations to construct statistical project control charts based on
a fictitious project network example. Figure 2 displays the fictitious illustrative project network
from [12] with 10 non-dummy activities. The numbers above each node are used to display the
predefined baseline duration (in days) while the number below each node denotes the baseline
total activity cost.
Figure 2: The fictitious illustrative project network with 10 non-dummy activities [12].
Table 2 displays five simulation runs (nrs = 5) where the activity durations have been randomly
chosen from a uniform distribution with a maximum relative deviation from the baseline duration
equal to 30%. The first row of table 2 shows the limits between which the fictitious activity
durations can be chosen.
Table 3 displays the earned value metric EV along the project duration (in days). It is assumed
that all activities follow a linear earned value accrual and, consequently, variation in an activity
duration has a linear impact on the actual activity cost.
Table 2: Normal activity variation (in days) from five simulation runs

Activity   2     3     4     5     6     7     8     9     10    11    RD
c±δ        3-5   6-12  1-1   3-5   4-6   1-1   5-9   6-10  2-4   2-4
Run 1      3     8     1     5     4     1     9     6     2     4     15
Run 2      5     6     1     5     6     1     7     6     3     3     16
Run 3      4     7     1     4     5     1     6     7     2     3     15
Run 4      4     12    1     3     5     1     9     8     4     2     16
Run 5      3     11    1     3     4     1     8     10    2     4     16
Table 3: The Earned Value (EV) and Planned Value (PV) along the project duration
Scenario 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
1 56 92 129 154 180 320 358 396 417 439 443 446 450 454 456
2 47 72 98 123 148 174 195 333 374 413 437 442 446 450 455 456
3 50 79 107 136 164 193 221 364 392 416 441 446 450 455 456
4 48 75 102 129 163 196 346 382 419 425 431 438 442 447 452 456
5 55 90 105 160 195 225 349 385 422 429 436 442 448 452 455 456
PV 49 77 105 133 161 189 333 380 408 434 440 446 450 454 455 456
Table 4: The SPI along the first fictitious project execution’s duration
Time 1 2 3 4 5 6 7 8
EV 56 92 129 154 180 320 358 396
PC 12.3 20.2 28.3 33.8 39.5 70.2 78.5 86.8
PV 49 77 105 133 161 189 333 380
SPI 1.14 1.19 1.23 1.16 1.12 1.69 1.08 1.04
Time 9 10 11 12 13 14 15 16
EV 417 439 443 446 450 454 456 456
PC 91.4 96.3 97.1 97.8 98.7 99.6 100 100
PV 408 434 440 446 450 454 455 456
SPI 1.02 1.01 1.01 1 1 1 1 1
3.1. X chart
To calculate the tolerance limits for the X chart, we need to find the first and last α quantiles for
Xp at all P = 9 instances. This illustrative example only includes the SPI as a control variable.
For more details regarding the calculations of the other earned value metric (SV) and the earned
schedule metrics (SPI(t) and SV(t)), we refer the reader to the detailed work by Vanhoucke [12]
and the original work on earned schedule by Lipke [33].
Figure 3 illustrates the tolerance limits obtained for an X chart for the SPI. The central line (CL)
is added for illustrative purposes and is the mean SPI along the project (X̄p); the outer lines
represent the lower and upper tolerance limits (LTL^10%_Xp and UTL^10%_Xp, for α = 10%), and the dots
illustrate how measurements made on a new run (with durations that exceed the predefined 30%
margins) are categorized as not as planned as soon as a value exceeds a tolerance limit.
Figure 3: X chart control limits for the illustrative example. Figure 4: R chart control limits for the illustrative example.
3.2. R chart
The R chart investigates the difference between two consecutive measurements. To illustrate
the procedure, the SPI values from table 5 are transformed into the ∆SPI values of table 6.
Afterwards, the sample mean (R̄p) and the quantiles are calculated from table 6. Figure 4 depicts
the R control chart with the same run outlined as figure 3. This illustrates how both charts can
show completely different dynamic behaviour of the calculated control limits, but can be read and
processed in entirely the same way.
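Since measurements are taken at time periods but the review grid is expressed in percentage completion, the SPI observations must first be mapped onto fixed PC intervals. A minimal sketch of this step (assuming numpy and simple linear interpolation; the paper's exact interpolation scheme may differ), using the PC and SPI values of run 1 from table 4:

```python
# Sketch: interpolating time-indexed SPI measurements onto a fixed grid of PC
# review points, as done to build table 6. Linear interpolation is assumed.
import numpy as np

# PC (%) and SPI at the end of each period (run 1; the duplicate final
# observation at PC = 100 is dropped so that pc is strictly increasing):
pc  = np.array([12.3, 20.2, 28.3, 33.8, 39.5, 70.2, 78.5, 86.8,
                91.4, 96.3, 97.1, 97.8, 98.7, 99.6, 100.0])
spi = np.array([1.14, 1.19, 1.23, 1.16, 1.12, 1.69, 1.08, 1.04,
                1.02, 1.01, 1.01, 1.00, 1.00, 1.00, 1.00])

grid = np.arange(20.0, 100.0, 10.0)      # review points at 20%, 30%, ..., 90%
spi_on_grid = np.interp(grid, pc, spi)
```

The ∆SPI values of table 6 then follow as the absolute differences between consecutive grid points.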
4. Computational experiment

In this section, the data generation process to create a project benchmark set, the methodology
used, as well as the main research questions set during the computational experiment in order to
validate the project control approach are explained along the following subsections.
Table 6: The ∆SPI for PC intervals after interpolation
PC 20% 30% 40% 50% 60% 70% 80% 90%
1 0.08 0.02 0.08 0.18 0.19 0.19 0.62 0.04
2 0.03 0.01 0.13 0.13 0.1 0.09 0.06 0.04
3 0.01 0.01 0 0.35 0.1 0.1 0.09 0
4 0.01 0.01 0.05 0.01 0 0 0.02 0.01
5 0.06 0.04 0.09 0.02 0.05 0.06 0.05 0.01
Rp 0.04 0.02 0.07 0.14 0.09 0.09 0.17 0.02
LTL^10%_Rp   0.01  0.01  0.02  0.01  0.02  0.02  0.03  0
UTL^10%_Rp   0.07  0.03  0.11  0.28  0.15  0.15  0.41  0.04
The X and R project control charts are tested on a set of 900 fictitious projects generated by the
project generator RanGen [34, 35], which has been used previously in project scheduling and control
studies. Vanhoucke [28] uses this dataset to connect network topology information to the optimal
control procedure found to monitor the time performance of a project.
The 900 projects are chosen in accordance with the serial/parallel (SP) indicator originally proposed
by Vanhoucke et al. [36] and used in previous project control simulation studies [24, 28, 29]. The
SP indicator measures how closely a project matches a completely serial or parallel project and is based
on the maximal progressive level concept of Elmaghraby [37]. The maximal progressive
level m is defined as the maximum number of activities lying on a single path in the project.
If n is the number of activities in the project, it is clear that for a serial project m = n, so that
SP = (m − 1)/(n − 1) equals 1. If the longest path in a project consists
of only 1 activity, the project network is said to be parallel and SP = 0. The project dataset
consists of 100 projects for each of 9 intermediate SP values between 0 and 1 (SP = {0.1, 0.2, . . . , 0.9}).
If conclusions are formulated regarding a mean value over all 900 projects, it is conjectured that
they are general enough to extrapolate over all possible project networks. In the results section 5,
we did not find a significant influence of such serial/parallel network structure on the performance
of our control chart to monitor the schedule performance of a project. This result shows that our
statistical project control approach provides a more robust solution than regular use of EVM/ES,
which was found to improve notably for more serial project networks [28].
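The SP indicator defined above can be sketched in a few lines. The network encoding (a successor-list dictionary over a cycle-free network with activities numbered 1..n) and the function names are our own illustrative choices:

```python
# Sketch: the serial/parallel indicator SP = (m - 1) / (n - 1), where m is the
# maximal progressive level (the largest number of activities on one path).

def sp_indicator(successors, n):
    """successors: activity -> list of successor activities (non-dummy, acyclic)."""
    memo = {}
    def longest_from(a):            # number of activities on the longest path from a
        if a not in memo:
            memo[a] = 1 + max((longest_from(s) for s in successors.get(a, [])),
                              default=0)
        return memo[a]
    m = max(longest_from(a) for a in range(1, n + 1))
    return (m - 1) / (n - 1)

serial = {1: [2], 2: [3], 3: []}    # 1 -> 2 -> 3: m = n = 3, so SP = 1
parallel = {1: [], 2: [], 3: []}    # no precedence relations: m = 1, so SP = 0
```

The two boundary cases confirm the definition: a chain of activities gives SP = 1 and a fully parallel network gives SP = 0.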
4.2. Methodology
Monte Carlo simulations are used in the static phase of our statistical project control procedure
to construct the tolerance limits used in the X and R charts, as well as to emulate real project
executions in the dynamic phase where fictitious project progress will be classified using these
charts. A distinction is made between both phases in the form of the variation present at the
activity level.
In order to generate fictitious project executions in both the static and dynamic phase of the Monte
Carlo simulation, we produce activity durations under interactive and compound uncertainties
[38]. Therefore, two sources of uncertainty are applied. In section 4.2.1, variation is modelled as
probability distributions that are applied to the duration of the project’s activities. This variation
will result in fictitious project executions that are either as planned or not as planned. In section
4.2.2, we discuss the potential dependencies between a project’s activities and how project progress
can be affected by uncertain events.
For the construction of the tolerance limits we will allow only variation that is acceptable at the
activity level. The projects are executed as planned, which represents the desired state of schedule
control in the presence of uncertain project-wide events. In the dynamic phase, fictitious project
progress is generated for which it is possible that the durations of the activities exhibit unacceptable
variation. These fictitious project executions are then used to quantify the performance of the
proposed control charts in their ability to accurately categorize whether the plan is still adequate
or not. Consequently, the project control charts are used to test whether the fictitious project
executions are being executed as planned or not as planned.
As planned project progress. Variation is added to the baseline duration d̂i of an activity i in
the project by a uniform distribution. We define the as planned project schedule executions
in the static phase using the lightgrey probability distribution functions presented in figure 5,
characterised by a choice for the maximum allowed deviation from the baseline estimate duration
δap . This distribution is presented relative to the baseline duration. The Monte Carlo samples
that are drawn from this distribution therefore need to be multiplied with the baseline duration
dˆi , in order to model the duration di of an activity i in a fictitious execution of the project.
The maximal relative deviation δap from the baseline estimate duration represents the subjective
estimate of the variation on the baseline schedule that is regarded as planned.
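The as planned sampling step can be sketched directly from this definition, di = d̂i · U(1 − δap, 1 + δap). The sketch below assumes numpy; the baseline durations and the δap value are illustrative:

```python
# Sketch: "as planned" activity durations, sampled as the baseline duration
# multiplied by a uniform factor within the allowed relative deviation δap.
import numpy as np

def sample_durations(baseline, delta_ap, rng):
    """One fictitious project execution within the as planned margin."""
    baseline = np.asarray(baseline, dtype=float)
    factors = rng.uniform(1.0 - delta_ap, 1.0 + delta_ap, size=baseline.shape)
    return baseline * factors

rng = np.random.default_rng(0)
d = sample_durations([4.0, 9.0, 1.0, 4.0], delta_ap=0.30, rng=rng)
```

Repeating this draw nrs times and running the schedule forward yields the simulated EVM/ES observations from which the tolerance limits are built.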
Not as planned project progress. In the dynamic phase, project progress situations are
simulated for which the baseline schedule might no longer be adequate. In order to simulate this,
variation is added using a not as planned uniform probability distribution that has a standard
deviation larger than that of the as planned uniform distribution. Figure 5 shows two illustrative
examples of this not as planned uniform distribution, hatched in black, for a low and a high value
of %Anp . The variable %Anp expresses the percentage of the total number of activities N
whose duration is not confined within the as planned margin δap . %Anp is therefore proposed
to provide an intuitive basis for creating not as planned project progress situations. The larger
%Anp , the more the project progress can be regarded as going not as planned, and the more
desirable the warning signals generated by a project control procedure become.
[Figure 5: uniform probability distribution functions for the as planned (lightgrey) and not as
planned (hatched in black) scenarios, shown for a low and a high value of %Anp . The as planned
distribution spans [1 − δap , 1 + δap ]; the not as planned distribution spans [1 − δnp− , 1 + δnp+ ],
with the lower extreme truncated at 0 in the high %Anp panel.]
The as planned parameter δap and the not as planned parameter %Anp will be set in our experiment
according to the values specified in table 7. From the δap value, the extremes of the (lightgrey) as
planned uniform probability distribution are calculated directly (1 − δap and 1 + δap ). Indirectly,
the extremes of the (hatched in black) not as planned uniform distribution function (1 − δnp−
and 1 + δnp+ ) can be found using equations 11 and 12. We discuss below how these extremes are
calculated.
Let us assume that the N activities in the project have identical probability distributions and that
they are sampled independently. Note that we add dependencies through linear association in
section 4.2.2. Let pnp denote the probability of drawing a value
outside of the as planned margin. Consider this a success/failure or coin toss experiment where
a draw from outside the as planned margin represents a success. The probability of drawing
a number of successes Nnp larger than %Anp N from a total of N is given by the binomial
distribution.
P[Nnp > %Anp N ] = 1 − P[Nnp ≤ %Anp N ]
                 = 1 − Σ_{i=0}^{%Anp N} [N ! / (i! (N − i)!)] pnp^i (1 − pnp )^(N −i)    (11)
In order to calculate pnp from equation 11, we have to assign a value to its left-hand side. This
value represents the chance that more than %Anp N successes are drawn. We chose this value
as 95% in our experiments, corresponding to a probability of 5% that a fictitious project with
no more than %Anp N successes is simulated. For a simulation of 1,000 runs, an average of 50
such fictitious projects can then be expected in the output. From equation 11, we proceeded
with a numerical approximation to calculate pnp .
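Equation 11 has no closed-form solution for pnp , but its left-hand side increases monotonically with pnp , so a simple bisection search suffices. A sketch of such a numerical approximation (the function names and the 30-activity example are our own assumptions):

```python
from math import comb

def binom_cdf(k, n, p):
    """P[X <= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def solve_pnp(n_activities, pct_anp, target=0.95, tol=1e-9):
    """Bisection for p_np in equation 11: find p_np such that
    P[N_np > %Anp * N] = target. The exceedance probability is
    monotonically increasing in p_np, so bisection converges."""
    k = int(pct_anp * n_activities)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - binom_cdf(k, n_activities, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: N = 30 activities and %Anp = 20%: find p_np such that more
# than 6 draws fall outside the as planned margin with probability 95%.
p_np = solve_pnp(30, 0.20)
assert 0 < p_np < 1
```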
pnp = P[|D − 1| > δap ] = 1 − 2δap / (δnp+ + δnp− )    (12)
From pnp and δap , the extremes of the not as planned uniform distribution (represented using
δnp− and δnp+ ) can then be found from equation 12, where D denotes the random variable that
is drawn.
In order to derive a value for both δnp− and δnp+ , an assumption has to be made with respect
to one or both of these unknowns. We chose to implement a linear relation (δnp+ = 1.5 δnp− )
which expresses the tendency of project activities to be late rather than early. However, in the
cases where this relation resulted in a negative value for the lower extreme of the distribution
(1 − δnp− < 0), δnp− was fixed at 1 and the only unknown δnp+ was found from equation 12. The
factor 1.5 shown for the linear relation is arbitrary and different choices were also implemented
during our tests.
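Combining equation 12 with the assumed linear relation δnp+ = 1.5 δnp− gives the extremes in closed form; a sketch (the function name and example values are our own, and the fallback branch mirrors the δnp− = 1 rule above):

```python
def not_as_planned_extremes(delta_ap, p_np, ratio=1.5):
    """Solve equation 12 for the extremes of the not as planned uniform
    distribution, assuming delta_np_plus = ratio * delta_np_minus.
    If the lower extreme 1 - delta_np_minus would be negative,
    delta_np_minus is fixed at 1 and only delta_np_plus is solved."""
    total = 2 * delta_ap / (1 - p_np)   # delta_np_plus + delta_np_minus
    d_minus = total / (1 + ratio)
    if d_minus > 1:                     # would give a negative lower extreme
        d_minus = 1.0
    d_plus = total - d_minus
    return d_minus, d_plus

# Example with delta_ap = 20% and an (assumed) p_np of 0.36.
d_minus, d_plus = not_as_planned_extremes(0.20, 0.36)
# The extremes reproduce p_np when substituted back into equation 12:
assert abs((1 - 2 * 0.20 / (d_plus + d_minus)) - 0.36) < 1e-12
```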
Table 7 gives an overview of the simulation experiments conducted for this research. A total of
400 different simulation experiments were run for each project network according to the inputs for
the parameters µB , cvB , δap and %Anp provided in table 7. These parameters define, respectively,
the extent to which dependencies between activities are incorporated (µB and cvB ) and the as
planned and not as planned variation (δap and %Anp ). It should be noted that although the independence assumption is
retained for some scenarios (denoted as cvB = 0), a constant B is still used in these to alter
the mean (µB ∈ {0.8, 1, 1.5, 2}). The 400 experiments produce 1,000 runs for all 900 projects
(section 4.1), for which the baseline estimates and costs are assigned according to the first column
of table 7. Periodic EVM/ES reports are generated at 19 distinct moments in the project (p =
Table 7: Overview of the computational experiments

Baseline                          Simulation model               as planned settings       not as planned settings
Estimates:                        Independence:                  nrs = 1,000               nrs = 1,000
  ∀i ∈ N : dˆi ~ Uniform[8; 56]   µB ∈ {0.8, 1, 1.5, 2}          δap ∈ {20%, .., 80%}      δap ∈ {20%, .., 80%}
Fixed cost:                       cvB = 0                        %Anp = 0                  %Anp ∈ {20%, .., 80%}
  Uniform[0; 500]                 Linear association:            nrs = 1,000               nrs = 1,000
Variable cost:                    µB ∈ {0.8, 1, 1.5, 2}          δap ∈ {20%, .., 80%}      δap ∈ {20%, .., 80%}
  Uniform[700; 1500]              cvB ∈ {0.3, 0.8, 2}            %Anp = 0                  %Anp ∈ {20%, .., 80%}
{5%, 10%, 15%, 20%, . . . , 90%, 95%}) and interpolation is done as shown in section 3 with a time-
increment chosen sufficiently small. The durations of the activities were expressed in days, whereas
the time-increment was in the order of hours.
We will use these experiments to demonstrate the value of statistical project control for project
management in section 5 using the control efficiency measures defined in section 4.4. We have
tried to incorporate the best practices for project control modelling as provided by the current
literature, to produce results that are as generalisable to project management practice as possible.
However, we will expand on the limitations of our research, and its results, in the concluding
section 6.
The detection performance. The detection performance equals the frequentist approximation of
the probability that a certain not as planned situation will result in a signal during the dynamic
phase, given the as planned data from the static phase. A signal is generated, as in section 2.3.2,
if the project performance Yp or |Yp − Yp−1 | is not confined within the tolerance limits of the X
or R chart, respectively, at any review period p.
The detection performance is calculated as:
(1/nrs ) Σ_{s=1}^{nrs} 1s {∃p | (Yp < LTL^α_Xp ) ∨ (Yp > UTL^α_Xp )}    (13)
for the X chart, where 1s {A} is the indicator function of an event A for a simulation run s.
Stated otherwise, we represent the occurrence of a signal for the X chart at review period p as
the event PX :
(1/nrs ) Σ_{s=1}^{nrs} 1s {∃p | PX }    (14)

and analogously for the R chart, where the occurrence of a signal at review period p is represented
as the event PR :

(1/nrs ) Σ_{s=1}^{nrs} 1s {∃p | PR }    (15)

where:

1s {∃p | PR } = 1s {∃p | (|Yp − Yp−1 | < LTL^α_Rp ) ∨ (|Yp − Yp−1 | > UTL^α_Rp )}    (16)
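The detection performance of the X chart can be sketched in a few lines (a toy illustration with invented limits and series, not the authors' implementation):

```python
def detection_performance(runs, ltl, utl):
    """Fraction of simulation runs in which the X chart signals at least
    once: Y_p outside [LTL_p, UTL_p] at some review period p (cf. the
    indicator-average form of equation 13). `runs` holds one performance
    metric series per run; `ltl` and `utl` are the per-period tolerance
    limits obtained in the static phase."""
    def signals(series):
        return any(y < lo or y > hi for y, lo, hi in zip(series, ltl, utl))
    return sum(signals(series) for series in runs) / len(runs)

# Toy example with two review periods and three fictitious runs:
ltl, utl = [0.8, 0.8], [1.2, 1.2]
runs = [[1.0, 1.1],   # stays inside the limits -> no signal
        [1.0, 1.5],   # exceeds UTL at p = 2    -> signal
        [0.5, 1.0]]   # below LTL at p = 1      -> signal
assert detection_performance(runs, ltl, utl) == 2 / 3
```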
Area under the curve. The probability of overreactions and the detection performance can both be
used as approximations of the effectiveness of the proposed project control procedures. However,
the results in section 5 will express the need for a more holistic representation of the effectiveness.
We find this in the area under the curve (AUC) calculated for a receiver operating characteristic
(ROC) curve. The ROC representation is widely used in classification testing and machine
learning [54] and can be plotted directly from the probability of overreactions and the detection
performance.
Riemann integration of the ROC curve provides us with a single unitless measure for the area
under the curve that captures the characteristics of both the probability of overreactions and the
detection performance at the same time. The AUC should be more than 0.5, which is the area
under the no-discrimination line. This is the line where the true positive rate equals the false
positive rate, and which can be regarded as the characteristic of a classification action that is
purely based on a randomized process such as a coin toss.
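As a sketch, the Riemann (trapezoidal) integration of a ROC curve built from (probability of overreactions, detection performance) pairs could look as follows (an illustration of the concept, not the authors' code):

```python
def auc(false_positive_rates, true_positive_rates):
    """Area under the ROC curve by trapezoidal (Riemann) integration.
    The inputs are, e.g., the probability of overreactions and the
    detection performance obtained for a range of tolerance levels."""
    pts = sorted(zip(false_positive_rates, true_positive_rates))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# A random classifier traces the no-discrimination line (AUC = 0.5) ...
assert auc([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]) == 0.5
# ... while a curve rising above that line scores closer to 1.
assert auc([0.0, 0.1, 1.0], [0.0, 0.8, 1.0]) > 0.5
```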
In this section, we present our findings from our large computational experiment. First, we
introduce how the concept of the tolerance level α can be used to set a desired degree of probability
of overreactions and how the detection performance is affected, in section 5.1. Combined, these
two output measures allow us to draw a ROC curve and to calculate the AUC.
Section 5.2 provides the validation for our project progress simulation model according to the
recently postulated lognormal core theory [23, 42]. The efficiency of the described X and R
statistical project control charts is explored in section 5.3 and compared, in section 5.4, to a
more pragmatic model of traditional EVM/ES use and to its implementation for different control
points throughout the project.
When the statistical project control charts are constructed, we use a parameter α as the tolerance
level (see section 2.3). This represents the choice for the first and last αth quantile of the empirical
[Figure: the probability of overreactions and the detection performance plotted against the
tolerance level α (0% to 40%), for an X chart and an R chart.]
Figure 6: Setting the tolerance level α for a X chart, for a (δap , %Anp ) = (20%, 20%) scenario
under the independence assumption (µB = 1, cvB = 0).
Figure 7: Setting the tolerance level α for a R chart, for a (δap , %Anp ) = (20%, 20%) scenario
under the independence assumption (µB = 1, cvB = 0).
distribution function constructed in the static phase to form the tolerance limits of the X and R
chart.
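As an illustration of how such empirical tolerance limits could be computed, the following sketch uses nearest-rank sample quantiles (our own simplification; the exact quantile definition of section 2.3 may differ):

```python
def tolerance_limits(static_samples, alpha):
    """Lower and upper tolerance limits as the alpha-th and (1 - alpha)-th
    sample quantiles of the empirical distribution built in the static
    phase (nearest-rank quantiles, a simplified sketch)."""
    xs = sorted(static_samples)
    n = len(xs)
    lo = xs[max(0, int(alpha * n) - 1)]
    hi = xs[min(n - 1, int((1 - alpha) * n))]
    return lo, hi

# Example: 100 static-phase values of a performance metric at one review
# period; with alpha = 5%, roughly the 5th and 96th values form the limits.
samples = [i / 100 for i in range(1, 101)]   # 0.01 .. 1.00
lo, hi = tolerance_limits(samples, 0.05)
assert lo < 0.10 and hi > 0.90
```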
It is clear that α will affect both the probability of overreactions and the detection performance of
the X and R charts. Therefore, an appropriate choice for α should reflect both a project manager’s
aversion to risk and his/her willingness to invest effort in false alarms. Figures 6 and 7 illustrate
the effect α has on the probability of overreactions and the detection performance for one specific
scenario.
It is easily perceived that the real strength of a chart lies in the relative improvement of the
detection performance over a certain increase in the probability of overreactions. Ideally, the
probability of overreactions is as close to 0 as possible. This would mean that the project
management team does not need to invest time and effort unnecessarily in drilling down the
WBS, only to find that the variation at the activity level is confined within the acceptable
margins. Analogously, we prefer a detection performance as close to 1 as possible. For a certain
scenario, this would imply that the not as planned situation gets detected somewhere throughout
the lifetime of the project. This preference is accurately translated into a calculated AUC that
needs to be as close to 1 as possible.
The ROC curves and the corresponding AUC values for the X chart and R chart from figures 6
and 7 are depicted in figure 8.
In order to validate the project progress simulation model (presented in section 4.2.2), we anal-
ysed the simulation output according to the lognormal core theory [23, 42]. Figures 9 and 10
depict boxplots for respectively the calculated mean and standard deviations of our simulation
output.
Figure 9 presents the mean of the simulated activity durations. This figure clearly shows how the
average activity duration in a project is affected by µB . If B is a random variable (cvB ≠ 0) rather
than a constant (cvB = 0), both the spread and the average of the mean duration increase.
Figure 10 shows the effect that the coefficient of variation cvB has on the standard deviation of the
simulated activity durations. Both the average and the spread of the standard deviation increase
with increasing cvB . This effect is noticeably larger for the not as planned situations.
[Figure: ROC curve of detection performance versus probability of overreactions;
AUC(X) = 0.84, AUC(R) = 0.93.]
Figure 8: Area under the curve for the X and R chart, for a (δap , %Anp ) = (20%, 20%) scenario
under the independence assumption (µB = 1, cvB = 0).
In conclusion, we can safely state that in order to produce project progress simulations with
variation of the orders of magnitude discussed by Trietsch et al. [42], with high variation
instances having a standard deviation larger than 2 and with large deviations in the mean, linear
association via a lognormal variable B is preferred.
This part of the results section explores the efficiency of the proposed control charts using tolerance
limits on EVM/ES by means of the area under the curve measure introduced in section 4.4.
[Figure: boxplots of the mean m and the standard deviation s of the simulated activity
durations, grouped by µB ∈ {1, 1.5, 2} and cvB ∈ {0, 0.3, 0.8, 2}.]
Figure 9: Distribution of the resulting mean in the project simulation model.
Figure 10: Distribution of the resulting standard deviations in the project simulation model.
[Figure: grid of area under the curve values for the X and R charts over combinations of the
static phase setting (cvB = 0, 0.3, 0.8, 2) and the dynamic phase setting (cvB = 0, 0.3, 0.8, 2).]
Figure 11: Area under the curve for different static/dynamic phase combinations
[Figure: area under the curve (legend from 0.5 to 1) as a function of δap and %Anp .]
Figure 12: Area under the curve for different not as planned situations
[Figure: boxplots of the area under the curve for the X and R charts, binned by the change in
mean ∆m (with ∆s ∈ [-0.1, 0.1]) and by the change in standard deviation ∆s (with
∆m ∈ [-0.1, 0.1]).]
Figure 13: The effect a change in the mean has on the AUC.
Figure 14: The effect a change in the standard deviation has on the AUC.
In this section we revisit the comparison of project control charts using statistical tolerance
limits (STL) with the more traditional use of EVM/ES metrics and with some recent advances
in the project schedule control literature. Although practical use of EVM/ES is typically
characterized by on-the-spot decision-making from practical experience, we implement four
techniques against which to compare our statistical control charts.
• Rules-of-thumb (RoT): This project schedule control approach combines the use of
project control charts with rules-of-thumb. Instead of our statistical tolerance limits for the X
chart, these charts employ either static, widening or narrowing tolerance limits for schedule
control purposes. The tolerance limits, represented by a p + b, are implemented as symmetrical
lines around a percentage complete axis. Therefore, the median of each performance metric
Q̂(0.5)p calculated in the static phase is subtracted for all percentage project complete p.
– Static tolerance limits mimic the “Target Performance Chart” [56]. For these tolerance
limits, the slope a equals 0.
– Widening tolerance limits should be used in combination with SV and SV(t). These
variance-based performance metrics grow larger in absolute figures during project progress
and therefore, an increasing upper tolerance limit should be used. Since the lower tol-
erance limit is symmetrical, overall the tolerance limits can be said to be widening
towards the end of the project.
– Narrowing tolerance limits should be used in combination with SPI and SPI(t). Simple
simulations indicate that the variation for these index-based performance metrics de-
creases along the percentage complete axis. Consequently, narrowing tolerance limits
are preferred during project progress.
Without loss of generality, we assume that these control strategies represent a best-case
decision-making process from practical experience, since they are calibrated to the static
phase simulation data, just as the X and R control charts are. This calibration is done for
the slope a and the intercept b, which are optimized using a coarse search algorithm over a
large solution space, in order to result in probabilities of overreactions that are competitive
with those chosen for the X and R chart. In practical decision-making from experience, this
calibration is impossible and the real-life probabilities of overreactions can be expected to
be much higher. We therefore consider this RoT control strategy to be best-case EVM/ES
decision making without the use of statistical tolerance limits.
• EVM/ES on multiple control points: The use of EVM/ES control metrics applied at
different (and multiple) control points in the project schedule has recently been discussed by
Colin and Vanhoucke [57]. Inspired by the concepts of the critical chain/buffer management
(CC/BM) methodology, different control points throughout the project are suggested at
which EVM/ES control charts should be monitored. The two alternative approaches are
discussed in the original paper and can be briefly summarized as follows:
– Feeding paths (FP): The concept of feeding paths is adopted from the CC/BM literature.
EVM/ES schedule control metrics are calculated for those paths in the project
network where a buffer should be placed according to CC/BM in order to mitigate risks
and to protect the critical path. The schedule control metrics are then monitored on a
control chart against rules-of-thumb tolerance levels.
– Sub-networks (SN): The concept of sub-network control points expands on the concept
of feeding paths. Schedule control metrics are calculated for the collection of paths that
enter the critical path at a given point. Again, these EVM/ES metrics are referenced
against tolerance limits on control charts. The tolerance limits are calculated using
sample quantiles, in a manner similar to that presented in section 2.3.
It should be noted that the difference between the two techniques mainly lies in the number
of control points; figure 15 only gives a summary overview of the main results to compare
the alternative project control methods.
[Figure: boxplots of the area under the curve for project schedule control using EVM/ES,
comparing STL (X, R), RoT (static, widening, narrowing), FP (1-10 control points), SN (1-5
control points) and LP.]
Figure 15: Area under the curve comparison of the X and R charts (STL) with other schedule control approaches
using EVM/ES
• Longest path monitoring (LP): Inspired by the fact that the EVM/ES schedule control
metrics are fully reliable for serial networks but have a decreasing performance for more
parallel networks [28], the longest path control approach was proposed by Lipke [58]. In this
method, SPI(t) is referenced against a static tolerance limit on a single project control chart.
The SPI(t) is however not calculated for all activities of the project, but only for those that
lie on the longest path in the project network. This longest path is updated dynamically
during project progress with actual durations for those activities that have already been
finished and the baseline estimate durations for the activities that are not yet started.
Figure 15 depicts the calculated area under the curve as boxplots for the statistical project control
charts proposed in this paper and the alternative schedule control procedures outlined in this sec-
tion. This comparison is based on all the data obtained from the simulations in the static/dynamic
phase grid of figure 11. A detailed discussion on the merits and the distinct characteristics of the
procedures presented in this section falls outside the scope of this paper. In short, we conclude
that the X chart outperforms the R chart, based on the aggregated AUC over all performed
simulation experiments. Both charts, using statistical tolerance limits, should be preferred over
all other tested schedule control procedures for the following four reasons:
• The RoT strategies display a significantly lower AUC. The narrowing tolerance limits and
the widening tolerance limits can however be considered as useful alternatives.
• The FP method is shown for a combination of maximally 10 feeding paths monitored during
project progress. The maximal number of feeding paths can however be much larger, even for
small projects. The x-axis in figure 15 under FP depicts the number of control points that are
used. The first control point is always the critical path, while the others are used to monitor
the performance of the feeding paths. When not all feeding paths are monitored, information
about certain activities is lost. This results in an increased probability of overreactions, which
affects the detection performance and consequently the calculated AUC. Figure 15 shows that
this produces a decreasing schedule control performance for an increasing number of included
feeding paths.
• The SN method shows the opposite behaviour. AUC increases to the optimal unity area
under the curve for 5 sub-network control points included in the schedule control procedure.
When its performance is compared to the X chart, it proves to be a valuable alternative.
However, while the X control chart relies on only one control point, the SN approach needs
up to 5 control points to only slightly outperform the X chart. Moreover, the SN approach
also relies on the STL procedure which is the topic of this paper.
• The LP method [58] is found to be outperformed by both the X and the R chart. Using
our definition and measures for schedule control it is not able to produce similar results.
However, this method was developed foremost to produce project duration forecasts, rather
than to detect not as planned project progress.
In this paper, a project control system is presented which is inspired by the well-known statisti-
cal process control charts widely used to monitor manufacturing processes but is fundamentally
different in its use of the progress data to construct tolerance limits. The use of earned value
management performance metrics in a control chart, based on simulated random variation, makes
it possible to discriminate between as planned and not as planned project performance variation.
This paper measures the performance of the two proposed charts, using a wide range of fundamentally
different project networks in a large computational experiment. The first chart (X chart) mimics
a Shewhart chart for process control with adaptations to fit the finite nature of a project. The
X chart uses the commonly known schedule performance metrics provided in an earned value
management system. The second chart (R chart) proposes an adjustment on these metrics to
focus on the instantaneous changes in project performance. The assumption is that by performing
differential calculations, the accumulated performance of past activities is filtered and only active
tasks are being monitored.
Overall, the X chart proves the most promising for practical use in monitoring deviations
from the baseline schedule, although both charts show very comparable performance. The results
of various computational experiments show the relevance and usefulness of this approach in a
project control setting, although all the results are subject to the specific definition of as planned
and not as planned project progress and the characteristics of the simulation model as applied
in this research. In order to assess the performance of the statistical tolerance limit approach
discussed in the paper, the X and R charts' performances are compared to ad-hoc rules-of-thumb
strategies that would typically be used in project control using EVM/ES and to some recently
developed EVM/ES extensions. Overall, the X and R chart can defensibly be preferred over all
tested procedures.
Obviously, the results obtained in this study should be put in the right perspective. The use
of simulated data often shows some weaknesses and the results should therefore be interpreted
with care. A first assumption in our simulation study lies in the dependence structure of activity
durations during sampling. While the assumption of independence was invalidated in previous re-
search [42, 59], it has nevertheless been used in earlier reports on project control [28, 29]. We also
expanded our project progress simulation model with the concept of linear association [23]. How-
ever, the input for our simulation experiments depends largely on subjective estimates for activity
durations, where in practice calibration to historical data should always be preferred.
A second restriction of our simulated project control study is the strict focus on time performance
of projects. Although EVM/ES takes both a time and cost focus during project performance
management, this paper puts a strong and restricted focus on the time dimension of a project in
progress in order to validate the use of schedule performance metrics in a statistical project control
approach. Activity costs have been assumed to be linearly dependent on the activity duration
time, which is restricted to the use of periodic costs. In the future, our model could be adjusted to
incorporate other situations. Furthermore, other relevant project performance dimensions, such
as quality control or project scope control could not be taken into account in this study. Future
research will build on this paper, stretching its scope to address these restrictions and expanding
on the idea of using process control procedures for project control.
Finally, it should be mentioned that a detailed comparison between the statistical project control
approach of this paper and the alternative control approaches available in literature is not within
the scope of this paper. A comparison between the quality of methods such as Critical Path
Method (CPM, [1]), the Programme Evaluation and Review Technique (PERT, [60]), its novel
extension to PERT21 [23], the bottom-up approach using Schedule Risk Analysis [29] and the
Critical Chain Project Management (CCPM, [61, 62]) is therefore considered as an interesting
future research avenue to further improve and optimize the knowledge on project control.
This study should be relevant for both practitioners and academics for the following reasons.
Performance indicators from earned value management/earned schedule have been used widely
by practitioners in controlling projects. However, the lack of guidelines towards reasonable toler-
ance limits to discriminate between acceptable and unacceptable performance variation has often
classified this technique as secondary to the necessary skills of intuition, existing experience and
knowledge a project manager must have. The methodology and control approach proposed in
this paper quantifies the use of tolerance limits. This paper shows that the discriminative power
between as planned and not as planned variation in a project is much better when a customized
control chart is used rather than when manual and intuitive thresholds (represented through the
rules-of-thumb) are used. On-the-spot decision-making during project control is thereby
assisted. Next to the practical relevance, we also believe that this topic will contribute to new
research challenges where extensions of this approach might lead to an increased discriminative
power and better project control.
References
[1] J. Kelley, M. Walker, Critical path planning and scheduling: An introduction, Mauchly As-
sociates, Ambler, PA, 1959.
[2] J. Kelley, Critical path planning and scheduling: Mathematical basis, Operations Research 9
(1961) 296–320.
[3] D. Malcolm, J. Roseboom, C. Clark, W. Fazar, Application of a technique for a research and
development program evaluation, Operations Research (1959) 646–669.
[4] S. Hartmann, D. Briskorn, A survey of variants and extensions of the resource-constrained
project scheduling problem, European Journal of Operational Research 207 (2010) 1–15.
[6] E. Uyttewaal, Dynamic Scheduling With Microsoft Office Project 2003: The book by and for
professionals, Co-published with International Institute for Learning, Inc., 2005.
[7] M. Vanhoucke, Project Management with Dynamic Scheduling: Baseline Scheduling, Risk
Analysis and Project Control, Vol. XVIII, Springer, 2012.
[8] S. Rozenes, G. Vitner, S. Spraggett, Project control: literature review, Project Management
Journal 37 (2006) 5–14.
[9] S. Rozenes, G. Vitner, S. Spraggett, MPCS: Multidimensional project control system, Inter-
national Journal of Project Management 22 (2004) 109–118.
[10] Q. Fleming, J. Koppelman, Earned value project management. 3rd Edition, Newtown Square,
PA: Project Management Institute, 2005.
[11] W. Lipke, O. Zwikael, K. Henderson, F. Anbari, Prediction of project outcome: The appli-
cation of statistical methods to earned value management and earned schedule performance
indexes, International Journal of Project Management 27 (2009) 400–407.
[12] M. Vanhoucke, Measuring Time - Improving Project Performance using Earned Value Man-
agement, Vol. 136 of International Series in Operations Research and Management Science,
Springer, 2010.
[13] D. Hulett, Schedule risk analysis simplified, Project Management Network 10 (1996) 23–30.
[14] Y. Fang, J. Zhang, Performance of control charts for autoregressive conditional heteroscedastic
processes, Journal of Applied Statistics 26 (6) (1999) 701–714.
[15] W. A. Shewhart, Economic control of quality of manufactured product, Vol. 509, ASQ Quality
Press, 1931.
[16] W. Lipke, A study of the normality of earned value management indicators, The Measurable
News 4 (2002) 1,6,7,12–14,16.
[17] W. Lipke, J. Vaughn, Statistical process control meets earned value, CrossTalk: The Journal
of Defense Software Engineering June (2000) 16–20,28–29.
[18] G. T. Bauch, C. A. Chung, A statistical project control tool for engineering managers, Project
Management Journal 32 (2001) 37–44.
[19] Q. Wang, N. Jiang, L. Gou, M. Che, R. Zhang, Practical experiences of cost/schedule measure
through earned value management and statistical process control, Lecture Notes in Computer
Science 3966 (2006) 348–354.
[20] S. S. Leu, Y. C. Lin, Project performance evaluation based on statistical process control
techniques, Journal of Construction Engineering and Management 134 (2008) 813–819.
[21] R. Aliverdi, L. Moslemi Naeni, A. Salehipour, Monitoring project duration and cost in a
construction project by applying statistical quality control charts, International Journal of
Project Management.
[24] M. Vanhoucke, S. Vandevoorde, A simulation and evaluation of earned value metrics to fore-
cast the project duration, Journal of the Operational Research Society 58 (2007) 1361–1374.
[25] G. Burrows, Statistical tolerance limits–what are they, Tech. rep., Knolls Atomic Power Lab.,
Schenectady, NY (1962).
[26] R. Van Slyke, Monte Carlo methods and the PERT problem, Operations Research 11 (1963)
839–860.
[27] T. Williams, A classified bibliography of recent research relating to project risk management,
European Journal of Operational Research 85 (1995) 18–38.
[28] M. Vanhoucke, Using activity sensitivity and network topology information to monitor project
time performance, Omega The International Journal of Management Science 38 (2010) 359–
370.
[29] M. Vanhoucke, On the dynamic use of project performance and schedule risk information
during project tracking, Omega The International Journal of Management Science 39 (2011)
416–426.
[30] M. Vanhoucke, Integrated Project Management and Control: First come the theory, then the
practice, Management for Professionals, Springer, 2014.
[31] D. C. Montgomery, W. Woodall, Research issues and ideas in statistical process control,
Journal of Quality Technology 31 (4) (1999) 376–387.
[32] R. J. Hyndman, Y. Fan, Sample quantiles in statistical packages, The American Statistician
50 (4) (1996) 361–365.
[33] W. Lipke, Statistical methods applied to EVM ...the next frontier, The Measurable News
Winter.
[34] L. Tavares, J. Ferreira, J. Coelho, The risk of delay of a project in terms of the morphology
of its network, European Journal of Operational Research 119 (1999) 510–537.
[35] E. Demeulemeester, M. Vanhoucke, W. Herroelen, Rangen: A random network generator for
activity-on-the-node networks, Journal of Scheduling 6 (2003) 17–38.
[36] M. Vanhoucke, J. Coelho, D. Debels, B. Maenhout, L. Tavares, An evaluation of the adequacy
of project network generators with systematically sampled networks, European Journal of
Operational Research 187 (2008) 511–524.
[37] S. Elmaghraby, On criticality and sensitivity in activity networks, European Journal of Op-
erational Research 127 (2000) 220–238.
[38] S. Wang, G. Huang, An integrated approach for water resources decision making under inter-
active and compound uncertainties, Omega The International Journal of Management Science
44 (2014) 32–40.
[39] T. Williams, What are PERT estimates?, Journal of the Operational Research Society 46
(1995) 1498–1504.
[40] M. E. Kreye, Y. M. Goh, L. B. Newnes, P. Goodwin, Approaches to displaying information to
assist decisions under uncertainty, Omega The International Journal of Management Science
40 (6) (2012) 682–692.
[41] M. E. Kuhl, E. K. Lada, N. M. Steiger, M. A. Wagner, J. R. Wilson, Introduction to mod-
eling and generating probabilistic input processes for simulation, in: S. Henderson, B. Biller,
M. Hsieh, J. Shortle, J. Tew, R. Barton (Eds.), Proceedings of the 2007 Winter Simulation
Conference, New Jersey: Institute of Electrical and Electronics Engineers, 2007, pp. 63–76.
[42] D. Trietsch, L. Mazmanyan, L. Govergyan, K. R. Baker, Modeling activity times by the
Parkinson distribution with a lognormal core: Theory and validation, European Journal of
Operational Research (2012) 386–396.
[43] S. Mohan, M. Gopalakrishnan, H. Balasubramanian, A. Chandrashekar, A lognormal ap-
proximation of activity duration in PERT using two time estimates, Journal of the Operational
Research Society 58 (6) (2007) 827–831.
[44] E. Hahn, Mixture densities for project management activity times: A robust approach to
PERT, European Journal of Operational Research 188 (2008) 450–459.
[45] T. Kotiah, N. D. Wallace, Another look at the PERT assumptions, Management Science 20 (1)
(1973) 44–49.
[46] C. Ragsdale, The current state of network simulation in project management theory and
practice, Omega The International Journal of Management Science 17 (1) (1989) 21–25.
[47] R. Schonberger, Why projects are “always” late: A rationale based on manual simulation of
a PERT/CPM network, Interfaces 11 (1981) 65–70.
[48] D. L. Fisher, D. Saisi, W. M. Goldstein, Stochastic PERT networks: Op diagrams, critical paths
and the project completion time, Computers & Operations Research 12 (5) (1985) 471–482.
[49] B. Dodin, M. Sirvanci, Stochastic networks and the extreme value distribution, Computers
& Operations Research 17 (4) (1990) 397–409.
[50] T. Williams, Practical use of distributions in network analysis, Journal of the Operational
Research Society (1992) 265–270.
[51] F. Acebes, J. Pajares, J. M. Galán, A. López-Paredes, A new approach for project control
under uncertainty: Going back to the basics, International Journal of Project Management.
[52] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for
Statistical Computing, Vienna, Austria (2013).
URL http://www.R-project.org
[55] S. Lim, A joint optimal pricing and order quantity model under parameter uncertainty and
its practical implementation, Omega The International Journal of Management Science 41 (6)
(2013) 998–1007.
[56] F. Anbari, Earned value project management method and extensions, Project Management
Journal 34 (4) (2003) 12–23.
[57] J. Colin, M. Vanhoucke, A comparison of the performance of various project control methods
using earned value management systems, Submitted to an international journal.
[58] W. Lipke, Speculations on project duration forecasting, The Measurable News 3 (2012) 3–7.
[59] T. Williams, The contribution of mathematical modelling to the practice of project manage-
ment, IMA Journal of Management Mathematics 14 (1) (2003) 3–30.
[60] W. Fazar, Program evaluation and review technique, The American Statistician 13 (1959) 10.
[61] E. Goldratt, Critical Chain, North River Press, Great Barrington, MA., 1997.
[62] R. C. Ash, P. H. Pittman, Towards holistic project scheduling using critical chain methodology
enhanced with PERT buffering, International Journal of Project Organisation and Management
1 (2) (2008) 185–203.