Reactive Simulation of Shoring and Excavation Processes Based On Automated Performance Monitoring
Maximilian Bügler
Complete reprint of the dissertation approved by the Ingenieurfakultät Bau Geo Umwelt of the Technische Universität München for the award of the academic degree of a Doktor-Ingenieur.
Abstract
This Thesis presents novel concepts for the reactive simulation of shoring and excavation processes. In this context, methods for construction schedule optimization are combined with techniques for real-time data acquisition. Furthermore, uncertainty in the durations of activities and in the performance factors of the involved resources is considered. This Thesis presents a new schedule generation scheme, two heuristic scheduling algorithms, and a concept to automatically track excavation processes.
Acknowledgements
First and foremost, I would like to express my sincere gratitude to my advisor Prof. Dr.-Ing. André Borrmann for his continuous support of my research, and for his patience, knowledge, and motivation. His guidance helped me throughout the four years of my research
and the writing of this Thesis. I could not have imagined a better advisor for my Ph.D. studies.
Besides my advisor, I would like to thank the rest of my Thesis committee, Prof. Dr. rer. nat. Ernst Rank and Prof. Dr.-Ing. Markus König, for their insightful comments and encouragement.
My sincere thanks also go to Prof. Dr. Daniel Straub, Prof. Patricio Antonio Vela,
Prof. Dr.-Ing. Dipl.-Wi.-Ing. Willibald A. Günthner, and Prof. Dr.-Ing. Christoph van
Treeck, who gave me opportunities to collaborate in joint research endeavors.
I thank the Bavarian Research Foundation for funding the FAUST project, on which I worked throughout my studies.
Furthermore, I thank my fellow colleagues for the stimulating discussions, the hard work on joint publications, the unforgettable experiences we had during conference travels, and the amazing four years we spent together.
Last but not least, I would like to thank my family: my parents and my brother, for supporting me throughout the writing of this Thesis.
Contents
1 Introduction
1.1 Problem Statement and Scope
1.2 Hypotheses
1.3 Methodological Approach
1.4 Structure
1.5 Terminology
10 References
1 Introduction
Unexpected delays on construction sites, caused by internal and external influences, create
huge costs every year and account for up to 30 percent of the overall costs of construction
projects (Koehn et al., 1978). Common problems include the coordination of the different companies involved, weather, traffic, archaeological discoveries, leftover ammunition from wars, changes in the client's wishes, scarce resources, and opposition from the neighborhood (Assaf & Al-Hejji, 2006).
Due to the immense complexity of large scale construction sites, construction sched-
ules are highly sensitive to any changes in the sequence of activities to be performed.
Such detailed schedules are often rendered invalid after a short time and have to be updated, or new ones have to be created, taking the newly arisen circumstances into account. In order to improve this error-prone process, this thesis investigates ways to accompany construction with a simulation model and optimization techniques that support the maintenance of the plans. Compared to a priori simulation, this approach requires data to be recorded on-site in order to determine the progress of the individual activities.
Eventually this will cause the circumstances in the simulation model to adapt to the actual state of the project. This thesis aims to describe the development of the required methods and tools for the case of urban excavation and shoring projects with strong spatial constraints. This includes methods to extract the required data from the construction site in a predominantly automated way, to determine the progress of the individual activities, and to make predictions on the further progress. When data in the simulation model is updated, it has to be determined whether re-planning would be worthwhile. Since the re-planning process is very time-intensive for a human to perform, optimization algorithms were developed and analyzed in order to automate this process. Furthermore, the uncertainty that is naturally attached to construction processes needs to be taken into account. The concept of the resulting simulation-optimization cycle is illustrated in Figure 1.1.
Figure 1.1: Flow chart of the optimization-simulation cycle. The process starts with an initial plan that undergoes a simulation and optimization process. During the execution of the plan, data is acquired about the progress. Until the plan execution is done, the performance factors may require updating, which causes another simulation and optimization run, updating the plans accordingly.
Arrows connecting the activities represent precedence relationships. For instance, activity A
needs to be executed prior to activity B. The round rectangles represent resources, while
the finely dashed arrows imply resource usages. A number next to the arrows indicates
the number of resources of that kind required. If no number is given, this defaults to one.
In order to keep representations of simple examples lean, the colors and hatching of the activity rectangles also indicate the resource used. In this description we distinguish materials, which do not remain available after use, from resources, which can be used multiple times. Materials are illustrated using white hexagons, connected by dashed arrows. Similarly to the resources, numbers next to the arrows represent amounts. Finally
the overlapping parallelograms indicate geometric resources. The overlaps indicate which
areas or volumes overlap. A feasible solution to this kind of problem can be illustrated
as a Gantt chart showing the start and end times of the individual activities (Clark et al., 1922).
1.2 Hypotheses
This thesis discusses five main hypotheses:
• The uncertainty in construction operations can be modeled and handled by the presented methods.
• Heuristic optimization approaches can be used to create schedules taking the acquired data into account and minimizing the changes due to further delays.
• The new pseudo-serial schedule generation scheme offers a good basis for metaheuristic algorithms to solve resource-constrained project scheduling problems (RCPSPs).
1.4 Structure
This thesis is organized into nine Chapters. The first Chapter defines the problem and scope and presents the key hypotheses as well as the methodological approach.
The second Chapter introduces the main elements of construction project models and the concept of construction simulation, including discrete-event simulation, different representations, as well as possible data sources. Additionally, it discusses the issue of choosing the appropriate level of granularity and the associated trade-offs. Eventually, the advantages and limitations of the state of the art in construction simulation are discussed.
1.5 Terminology
This Section briefly describes important terms used throughout this thesis.
Activity: A process that requires a certain number of resources and the completion of specific predecessor activities in order to be executed. It either takes a predefined time to finish, or its duration is uncertain, following an underlying probability distribution.
Discrete-Event Simulation (DES): A method for simulating systems in which the state of the system only changes at discrete points in time, at which events happen, such as the completion of an activity.
Float Time: The amount of time an activity, or a set of activities, can be shifted or prolonged within a schedule without changing the remaining schedule and the overall makespan.
Makespan: The time difference between start and end time of a schedule.
Material: A non-renewable entity that can be used by an activity when executed, for
instance, concrete, girders, or bricks.
Prerequisite / Predecessor activity: An activity that has to be finished for another activity to be executed.
Project: A collection of activities, required and available resources, and constraints that need to be satisfied for each activity's execution.
Resource: A renewable entity that can be used by an activity when executed, but afterwards becomes available to other activities, for instance, machines, workforce, or geometric areas or volumes.
Schedule: A plan for the execution of a project, assigning start and end times, as well as
resources to each activity. Metrics for further analysis can be extracted from a schedule.
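The terms above can be made concrete with a small sketch (illustrative Python, not part of the thesis's tooling): activities with fixed durations and predecessor activities are assigned their earliest finish times, and the makespan is the latest finish time. Resource constraints are deliberately left out of this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """An activity with a fixed duration and a list of predecessor activities."""
    name: str
    duration: float
    predecessors: list = field(default_factory=list)  # names of required activities

def earliest_finish_times(activities):
    """Assign earliest finish times respecting precedence (no resource limits)."""
    acts = {a.name: a for a in activities}
    finish = {}
    def compute(name):
        if name not in finish:
            a = acts[name]
            start = max((compute(p) for p in a.predecessors), default=0.0)
            finish[name] = start + a.duration
        return finish[name]
    for a in activities:
        compute(a.name)
    return finish

# A must finish before B and C; B and C then run in parallel.
project = [Activity("A", 3.0), Activity("B", 2.0, ["A"]), Activity("C", 4.0, ["A"])]
ends = earliest_finish_times(project)
makespan = max(ends.values())  # latest finish time over all activities
```

Here activity B has a float time of two units: it can be shifted or prolonged by up to the difference between the makespan and its own finish time without affecting the schedule.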
2 State-of-the-Art in Modeling and Simulation of Construction Operations
The planning process of large construction projects involves large numbers of activities which are subject to numerous constraints and require different resources and geometric spaces. The process of analyzing, planning, scheduling, and estimating costs, taking all factors into account, is very tedious and error-prone when performed manually. Simulation methods are efficient tools that help the involved personnel reduce errors.
In order to simulate a real system, it has to be represented by an abstract model that serves as input for a simulation system. Several methods for the modeling of construction operations have been published in the literature, but since computing power has rapidly increased within the last decades, more and more new methods, as well as new data sources, have become available, making construction simulation increasingly interesting for the architecture, engineering, and construction (AEC) industry. This Chapter gives an overview of the evolution of modeling techniques and their feasibility for construction operations as well as for whole construction projects. As costs and uncertainty play an important role in construction planning, methods for representing those factors are discussed.
Another elementary question when modeling a system concerns the required granularity of the model and the desired quality of the simulation results. This aspect will also be taken into account.
The structure of this Chapter is illustrated in Figure 2.1. On the basis of a real system, in this case the construction sites in focus, simulation models are developed in order to resemble the processes that take place during the execution of the respective construction projects. The Chapter will introduce process models which describe the nature of repetitive processes in different ways. These models can be used to acquire metrics about
different aspects of the processes. The different process modeling approaches are described in detail in Section 2.5. As an input metric for those process models, the efficiency of the involved resources has to be known. The efficiency of the resources is described
in terms of performance factors, which are discussed in Section 2.1. The performance
factors form the link between the work to be performed and the time required to do so.
Once the processes involved in a construction project are known, a model of the entire construction project can be built up using different representations. This process is discussed in Section 2.6. The process of simulating the resulting model is described in Section 2.6.4. The results of the simulation can then be used to improve the execution plans, as described in detail in Chapters 3 and 4, and in that way feed back into the real system. Furthermore, the performance factors can be dynamically updated using on-site monitoring systems, as described in Chapters 5 and 6.
Figure 2.1: Flowchart of the reactive modeling and simulation cycle. The real system is
modeled as a collection of process models, which take performance factors of involved
resources into account. The performance factors are estimated and updated by monitoring
the real system. The real system however is subject to uncertainty which in turn affects
the resulting performance factors. The process models are combined into a project model
(See Section 2.6), which is then simulated to generate schedules for the real system.
Once a model has been created, it can be used to simulate the system using different simulation methods. For the case of construction simulation, this Chapter will focus on methods in the class of discrete-event simulation, which can be used to test different scenarios or hypotheses, as well as to create prognoses for the costs and duration of the project.
Performance is commonly measured as the amount of work performed over time. Pocock et al. (1996) also describe additional performance indicators such as cost performance, schedule performance, number of contract modifications, and safety information. Atkinson (1999) referred to the factors time, cost, and quality as The Iron Triangle. For the scope of this thesis, the productivity of machines and workers is the only type of performance factor in focus, as it unites time and costs.
In the case of shoring projects, the performance of excavation and shoring operations is vital (Park et al., 2005). The performance of excavation operations τe can be measured in
volume excavated over time as described in Equation 2.1. Another example where such
performance factors also play an important role is tunnel construction (Grima et al., 2000;
Špačková & Straub, 2013; Yagiz et al., 2009).
\tau_e = \frac{\Delta v}{\Delta t} = \frac{v_{t_2} - v_{t_1}}{t_2 - t_1} \qquad (2.1)
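As a minimal numerical illustration of Equation 2.1 (the measurement values are invented):

```python
def excavation_performance(v1, v2, t1, t2):
    """Excavation performance: excavated volume per unit of time (Equation 2.1)."""
    return (v2 - v1) / (t2 - t1)

# Hypothetical readings: 120 m^3 excavated by t = 2 h, 300 m^3 by t = 5 h.
tau_e = excavation_performance(120.0, 300.0, 2.0, 5.0)  # 60.0 m^3 per hour
```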
Navon (2005) followed a more general approach and analyzed the performance of construction projects based on four criteria. The positions of earth-moving equipment and personnel were tracked, the movement of tower cranes was recorded, and orders and deliveries of materials were tracked. Additionally, the status of guard rails was monitored to ensure the safety of the site. Generally, performance was measured as the amount of work
performed in a certain time. A downside of Navon’s approach is that the measurements
were very coarse and no absolute quantitative measurements of earthwork processes were
taken. Navon (2007) presents an approach to obtain more detailed measures of excavation
processes. The model still lacks absolute measurements of the excavated volume, such as
those presented in Chapter 6. Furthermore performance factors are almost always subject
to uncertainty (Baloi & Price, 2003; Salem & Zimmer, 2005; Szczesny et al., 2013).
Masmoudi & Haït (2013) presented such an approach in their work on project scheduling, based on the possibility theory of fuzzy sets by Zadeh (1978).
The following subsections describe the four commonly used probability distributions,
their characteristics, and their suitability for representing uncertain activity durations and
performance factors.
MacCrimmon & Ryavec (1964) argued that probability distributions used for construction operations should be continuous and limited between two extremes. Furthermore, AbouRizk & Halpin (1992) showed that common earthmoving operations can be described by a beta distribution, which also satisfies the aforementioned criteria. AbouRizk et al. (1991) studied this matter in depth and developed the Visual Interactive Beta Estimation System (VIBES). VIBES uses a combination of four statistical characteristics of the process durations to be modeled in order to approximate a beta distribution for processes where no detailed sample data is available. Two of the input characteristics have to be the extreme points, i.e., the minimum and maximum durations that can possibly occur, while the other two have to be chosen from the following characteristics: mode, mean, variance, or selected percentiles. VIBES then returns a parameter set for defining a beta distribution
that approximates the provided data. These parameters are the beta distribution's shape parameters α and β, and the extreme values. If detailed sample data is available, the parameters can also be estimated using other methods, such as Maximum Likelihood Estimates (MLE) (AbouRizk et al., 1994). The definition of the probability density function is given in Equation 2.2. The mean of the distribution is defined in terms of α and β in Equation 2.3.
f(x \mid \alpha, \beta) = \frac{x^{\alpha-1}\,(1-x)^{\beta-1}}{\int_0^1 u^{\alpha-1}\,(1-u)^{\beta-1}\,du} \qquad (2.2)
\mu = \frac{\alpha}{\alpha+\beta} \qquad (2.3)
As can be seen in Figure 2.2, manipulating the shape parameters causes the shape
to change in different ways. For all cases an example process with a minimum duration
of 2 minutes and a maximum of 6 minutes was chosen. Increasing the α value causes
the mean to shift towards the maximum possible duration, while increasing the β value
causes the mean to shift towards the minimum possible duration. Fente et al. (1999) showed that those properties of the beta distribution are ideal for modeling the uncertain durations of construction operations, and that, due to the natural limitations of such processes, both parameters must be larger than one.
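Scaling a standard beta sample into a bounded duration interval can be sketched in a few lines (illustrative Python; the shape parameters are invented, while the 2-6 minute bounds follow the example of Figure 2.2):

```python
import random

rng = random.Random(42)
alpha, beta = 4.0, 2.0   # shape parameters, chosen for illustration
d_min, d_max = 2.0, 6.0  # minimum and maximum possible durations (minutes)

# A standard beta sample lies in [0, 1]; scale it to the duration interval.
samples = [d_min + (d_max - d_min) * rng.betavariate(alpha, beta)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
# Equation 2.3 scaled to the interval: d_min + (d_max - d_min) * alpha / (alpha + beta)
```

Because the beta distribution is bounded, no sample can fall outside the minimum and maximum durations, which is exactly the property required above.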
Golenko-Ginzburg & Gonik (1997) additionally consider normal and uniform random
variables as activity durations. As the normal distribution is not limited between two
Figure 2.2: Shape change of beta distribution probability density function when manipu-
lating the shape parameters α (left) and β (right).
extremes it needs to be truncated in order to prevent the possibility of too high, too low, or
even negative activity durations. Figure 2.3 shows the effect of the normal distribution’s
parameters mean µ and standard deviation σ on the probability density function given
in Equation 2.4, when being truncated. As in the previous example, the Figure shows activity durations limited between two and six minutes, while shifting the mean or changing the standard deviation σ. With a sufficiently large value for σ, the result approaches a uniform distribution, where each value within the bounds is as likely as any other. In order to estimate the parameters σ and µ of the normal distribution, Equations 2.5 and 2.6 can be used.
f(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \qquad (2.4)
\mu = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (2.5)
\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2} \qquad (2.6)
Figure 2.3: Change of truncated normal distribution probability density function when
manipulating the parameters µ (left) and σ (right).
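Truncated sampling and the estimators of Equations 2.5 and 2.6 can be sketched as follows (illustrative Python using rejection sampling; the parameter values are the 2-6 minute example from the text):

```python
import math
import random

def truncated_normal(mu, sigma, lo, hi, rng):
    """Draw from a normal distribution truncated to [lo, hi] by rejection sampling."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(1)
# Durations limited between 2 and 6 minutes, as in the example of Figure 2.3.
data = [truncated_normal(4.0, 1.0, 2.0, 6.0, rng) for _ in range(10000)]

# Parameter estimates following Equations 2.5 and 2.6.
n = len(data)
mu_hat = sum(data) / n
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / n)
```

Note that the estimated standard deviation of the truncated sample is smaller than the σ of the underlying untruncated normal distribution, since the tails are cut off.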
Klein & Baris (1991) propose the use of the gamma and Weibull distributions to model service times, a use case very similar to construction activity durations.
The exponential and the χ² distributions are special cases of the gamma distribution and therefore do not need special treatment. The gamma distribution is defined by the probability density function given in Equation 2.7 and uses the gamma function as defined in Equation 2.8. The gamma distribution is only defined for positive values of x, but there is no upper bound. Therefore it is truncated in order to represent an upper limit on
the activity duration. The shape parameter k determines the skewness of the distribution, while the scale parameter θ stretches the distribution along the x axis. This behavior is illustrated in Figure 2.4. The parameters of the gamma distribution can be estimated by making use of the fact that the mean µ is defined as the product of the two parameters k and θ, as shown in Equation 2.9.
f(x \mid k, \theta) = \begin{cases} \dfrac{x^{k-1} e^{-x/\theta}}{\theta^{k}\,\Gamma(k)} & x \geq 0 \text{ and } k, \theta > 0,\\[4pt] 0 & \text{otherwise} \end{cases} \qquad (2.7)
Where:
\Gamma(t) = \int_0^{\infty} x^{t-1} e^{-x}\,dx \qquad (2.8)
\mu = k\theta \qquad (2.9)
Figure 2.4: Change of truncated gamma distribution probability density function when
manipulating the parameters k (left) and θ (right).
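Sampling from a gamma distribution and checking Equation 2.9 numerically can be sketched as follows (illustrative Python using the standard library; truncation is omitted for brevity):

```python
import random

rng = random.Random(7)
k, theta = 7.0, 1.0  # shape and scale, as in the bottom-left panel of Figure 2.6

# random.gammavariate takes the shape first and the scale second.
samples = [rng.gammavariate(k, theta) for _ in range(20000)]
mean = sum(samples) / len(samples)
# Equation 2.9: the sample mean approaches k * theta = 7.0
```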
Similar to the gamma distribution, the Weibull distribution (Weibull, 1951) is defined by a shape parameter k and a scale parameter λ. Both parameters are positive real numbers. A value of k < 1 indicates that the probability decreases over time, while k = 1 implies that the probability remains constant, and k > 1 implies that the probability increases
over time. In terms of construction activities this means that an activity is very likely to be finished in minimum time, is likely to be finished within a certain time window, or is likely to take close to the maximum time but possibly finish earlier. The mean of the Weibull distribution is defined in terms of λ and k in Equation 2.11 and makes use of the gamma function defined in Equation 2.8.
f(x \mid \lambda, k) = \begin{cases} \dfrac{k}{\lambda}\left(\dfrac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^{k}} & x \geq 0,\\[4pt] 0 & \text{otherwise} \end{cases} \qquad (2.10)
\mu = \lambda\,\Gamma\!\left(1 + \frac{1}{k}\right) \qquad (2.11)
Figure 2.5: Change of truncated Weibull distribution probability density function when
manipulating the parameters k (left) and λ (right).
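The relation in Equation 2.11 can be checked numerically (illustrative Python; truncation is again omitted, and the parameters match the bottom-right panel of Figure 2.6):

```python
import math
import random

rng = random.Random(3)
lam, k = 1.0, 5.0  # scale and shape of the Weibull distribution

# random.weibullvariate takes the scale first and the shape second.
samples = [rng.weibullvariate(lam, k) for _ in range(20000)]
mean = sum(samples) / len(samples)
expected = lam * math.gamma(1 + 1 / k)  # Equation 2.11
```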
Methods for fitting such distributions to observed data have been proposed by AbouRizk et al. (1991). Most commonly, either the χ²-Test (Plackett, 1983; Yates, 1934) or the Kolmogorov-Smirnov-Test (Smirnov, 1948) is used to verify whether a distribution fits the given data. For illustration purposes, sample data has been generated for each of the presented probability distributions. Figure 2.6 shows 10,000 samples of each of the distributions.
Figure 2.6: Sample data of four probability distributions: beta distribution α=10, β=3 (top left), normal distribution µ=0.5, σ=0.1 (top right), gamma distribution k=7, θ=1 (bottom left), Weibull distribution k=5, λ=1 (bottom right). The plots show 10,000 samples per distribution.
In order to show which distribution matches the given data, all samples were normal-
ized to the range of 0 to 1, such that all distributions are feasible candidates for all sample
sets. Then the χ²-Test, as well as the Kolmogorov-Smirnov-Test, have been applied to test every sample set against every candidate distribution. Each test returns whether the hypothesis that the data stems from the tested distribution is accepted or rejected, based on a p-value, which is the probability that the data, or more extreme (unlikely) versions of it, are samples of the tested distribution. The parameters of the distributions the data was sampled from are considered unknown and therefore not used for the tests. Both tests require a significance parameter α∗, the probability of rejecting a true hypothesis.
Tables 2.1 and 2.2 give the p-values resulting from the performed tests. Each column
is labeled with the distribution the data is tested for, and each row is labeled with the
Table 2.1: p-values of the χ²-Test for each combination of actual and hypothesized distribution.

Actual \ Hypothesis   Beta     Normal   Gamma    Weibull
Beta                  0.6617   0.0000   0.0000   0.0000
Normal                0.0000   0.9970   0.0000   0.0000
Gamma                 0.0000   0.0000   0.5964   0.0000
Weibull               0.0000   0.0000   0.4979   0.4977

Table 2.2: p-values of the Kolmogorov-Smirnov-Test for each combination of actual and hypothesized distribution.

Actual \ Hypothesis   Beta     Normal   Gamma    Weibull
Beta                  0.9387   0.0000   0.0000   0.0014
Normal                0.3531   0.8613   0.0000   0.0000
Gamma                 0.0000   0.0000   0.9720   0.0000
Weibull               0.0061   0.0000   0.9506   0.9632
distribution the data has actually been sampled from. As can be seen, the beta, normal
and gamma distributed samples can be easily identified with very high p-values. The
Weibull data on the other hand, could as well be sampled from a special case of the
gamma distribution, as a comparison of Figures 2.4 and 2.5 suggests. While the χ²-Test fails to attribute the data to either distribution, the Kolmogorov-Smirnov-Test returns positive results for both cases, but gives a higher p-value for the true case. All tests were performed using an α∗ value of 0.05.
This shows how goodness-of-fit tests can be used to identify a well-fitting probability distribution to describe sample data. The resulting probability distribution parameters form a vital part of a simulation model including uncertainty.
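The statistic underlying the Kolmogorov-Smirnov-Test can be sketched in a few lines (an illustrative implementation of the plain one-sample statistic, not the exact procedure used for Tables 2.1 and 2.2; the distributions and sample sizes are invented):

```python
import math
import random

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    empirical distribution function and the hypothesized CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def normal_cdf(x, mu=0.5, sigma=0.1):
    """CDF of the hypothesized normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = random.Random(0)
matching = [rng.gauss(0.5, 0.1) for _ in range(2000)]      # data from the hypothesis
differing = [rng.betavariate(10, 3) for _ in range(2000)]  # clearly different shape

d_matching = ks_statistic(matching, normal_cdf)
d_differing = ks_statistic(differing, normal_cdf)
# d_matching stays small, so the hypothesis would be accepted;
# d_differing is large, so it would be rejected.
```

The statistic is then compared against a critical value derived from α∗ and the sample size to decide whether the hypothesis is accepted.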
2.2.6 Monte-Carlo-Simulation
2.2.7 Conclusion
The preceding Sections provided an overview of the four most commonly used probability distributions. Literature in the scheduling domain most commonly proposes the use of the beta distribution for representing uncertainty in process durations. As the beta distribution is bounded, sampled values cannot fall below a minimum or exceed a maximum activity duration. Furthermore, the parameters can be tuned to represent activities that are either more likely to finish early or late. As the performance factors directly correlate with activity durations, this representation is considered equally good for that purpose. Due to its high suitability, uncertainty is represented using the beta distribution throughout this thesis. In particular, the extension of the Project Scheduling Problem Library (PSPLIB), the UPSPLIB, which is presented in Section 4.5, is built upon the beta distribution. Furthermore, the Monte-Carlo-Simulation method forms an important tool for dealing with such uncertain problems.
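A minimal Monte-Carlo sketch of such an uncertain problem (the activities, bounds, and shape parameters are invented for illustration): the makespan distribution of two sequential beta-distributed activities is estimated by repeated sampling.

```python
import random

def simulate_makespan(rng):
    """One Monte-Carlo run: two sequential activities with beta-distributed
    durations, each scaled to its own minimum/maximum bounds."""
    a = 2.0 + 4.0 * rng.betavariate(4, 2)  # activity A: between 2 and 6 minutes
    b = 3.0 + 3.0 * rng.betavariate(2, 2)  # activity B: between 3 and 6 minutes
    return a + b                           # B starts only after A has finished

rng = random.Random(0)
runs = [simulate_makespan(rng) for _ in range(10000)]
mean_makespan = sum(runs) / len(runs)
risk_over_10 = sum(r > 10.0 for r in runs) / len(runs)  # chance of exceeding 10 min
```

Beyond the expected makespan, such repeated sampling yields the full distribution of outcomes, from which risk measures like the probability of exceeding a deadline can be read off directly.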
2.4 Granularity
shown to be an effective approach (Panchal et al., 2008). Schruben & Yücesan (1993) have shown that the complexity of simulation models can be determined using graph-theoretic approaches. It was shown that several graph-theoretic measures can be used on the event
graph to measure the complexity of a model (Brooks & Tobias, 1996). Furthermore, the
characteristics of the resulting model can be formally analyzed as described in Section
3.4.2.
In order to illustrate the difference between the presented modeling approaches, a simple example of a common procedure in shoring operations, the production of a bored pile wall, has been chosen. A bored pile wall is commonly produced by repeating five sequential steps. First, the drilling rig moves to the position where the next pile should be drilled. Then the drill is aligned to be exactly vertical before the drilling process starts. Then
the hole is drilled. After drilling, in most cases a cage of reinforcement steel is lowered
into the hole, which is eventually filled with concrete. Before the hole can be filled with concrete, the drilling rig needs to move away from it. For simplification, the reinforcement step is not included in this example. The example procedure is depicted in
Figure 2.7. It involves two machines, the drilling rig and the concrete truck. The area around the hole to be drilled can be blocked by either of the machines. In large construction projects the production of bored pile walls can be parallelized using multiple machines, implying that the order of the piles to be drilled is not fixed. For overlapping bored pile walls, a partial drilling order is given. Also, drilling next to a freshly drilled pile is discouraged, as the wet concrete would spill over. For simplification, this is not taken into account in this example.
IBM described the first General Purpose Systems Simulation (GPSS) for computation purposes in the early 1960s (Gordon, 1961). Simulation models are created using a graphical modeling language. It consists of 26 different block types that can be used to generate block diagrams describing the system to be studied. There are entering and removing blocks, such as the "Call" and "Terminate" blocks. The "Call" block enters transactions from a list of available transactions into the simulation model at predefined time intervals or whenever the transaction is accepted by the following block. When a
Figure 2.7: Bored pile wall example illustration. The drilling rig is positioned above the
drill location (a), then the rig adjusts to be exactly upright (b), the hole is drilled (c), and
then filled with concrete (d). This illustration was inspired by https://de.wikipedia.org/
wiki/Datei:Foundation_pile_scheme.svg
transaction enters a "Terminate" block, it is removed from the simulation. The "Advance" block represents a simple activity that is executed whenever a transaction enters it and takes a certain amount of time. A "Gate" block either accepts or blocks an incoming transaction depending on the availability of a resource. A loop with a preceding dummy
"Advance" block creates a loop waiting until a resource becomes available. The number
on the right side of a "Gate" block selects the resource to be queried. A "Seize" block
describes the action of seizing a resource until a "Release" block releases it again. The
number in the right side triangle names the resource to be seized or released. A "Hold"
block is similar to an "Advance" block, but during execution holds on to the resource
named in the right side double triangle. Each block in the diagram is uniquely labeled by
a number.
The bored pile wall example is illustrated as a GPSS block diagram in Figure 2.8.
It consists of 15 blocks. The first block, a "Call" block, contains a list of piles to be
produced. Then a "Gate" block causes the system to wait for the drilling rig to be ready,
then seizing the drilling rig in the "Seize" block 3. The "Advance" and "Gate" blocks
4 and 5 wait for the area around the location to be drilled to be free, the "Seize" block
6, then seizes the location, after which the drilling rig moves to the location using the
"Advance" block 7. Block 8 adjusts the drilling rig and block 9 drills the hole. After that
block 10 releases the rig and blocks 11 and 12 wait for the cement truck. Then the "Hold"
block pours concrete into the hole, while holding the cement truck resource. Finally the
location is released in block 14 and the pile is terminated in block 15.
This way of describing the example delivers a good representation of the system. As
soon as the drilling rig is released in block 10, the "Call" block 1 can enter the next pile
into the simulation and it can be drilled while the previous hole is still being filled with
concrete. Hence it allows for parallel execution of the involved processes. Since GPSS is a general-purpose modeling language, it has a very high level of abstraction, making it difficult for non-experts to create valid representations. Furthermore, the model becomes much more complex when more flexibility is required, such as multiple drilling rigs being available.
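The behavior of the model, pouring one pile while the next is already being drilled, can be mimicked with a minimal procedural sketch (plain Python rather than GPSS; all durations are invented for illustration):

```python
# Durations in minutes for the four steps of the bored-pile cycle.
MOVE, ADJUST, DRILL, POUR = 5.0, 3.0, 30.0, 20.0

def simulate_piles(n_piles):
    """The rig performs move/adjust/drill and is then released; the cement truck
    pours afterwards, so pouring can overlap with drilling the next pile."""
    rig_free = 0.0    # time at which the drilling rig becomes available
    truck_free = 0.0  # time at which the cement truck becomes available
    finish_times = []
    for _ in range(n_piles):
        drill_done = rig_free + MOVE + ADJUST + DRILL  # rig seized for these steps
        rig_free = drill_done                          # rig released before pouring
        pour_start = max(drill_done, truck_free)       # wait for the cement truck
        truck_free = pour_start + POUR
        finish_times.append(truck_free)
    return finish_times

times = simulate_piles(3)
# Pile 1 is drilled by t=38 and concreted by t=58, while pile 2 is already
# being drilled from t=38 to t=76: the two processes overlap as in the model.
```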
Figure 2.8: Bored pile wall example as a block diagram in the notation for the General Purpose Systems Simulation (GPSS). The call element on the top left outputs all the piles that need to be produced one by one. Following the arrows, the pile is then produced and is finished once the pile instance reaches node 15, which terminates the production of that pile.
Figure 2.9 shows the GPSS model extended by an additional drilling rig. For this
purpose 3 more block types are used. The "Branch" block allows to send resources along
either of the outgoing arrows. It allows statistical selection among the possible choices, or
using the first outgoing arrow that is not blocked. The "Tag" block allows to tag a resource
with a number, and the "Transfer" block redirects the resource to an output according to
a previously assigned tag. The updated diagram adds a "Branch" block (number 2) that selects either drilling rig 1 or 2, seizing the respective rig in block 4 or 7. Furthermore the current pile is tagged with the assigned drilling rig. When the drilling rig is to be released, the assigned tag is used in the "Transfer" block 15 to release the respective
drilling rig. As can be seen in this example, adding additional resources to an existing
diagram is troublesome and requires tedious work. Adding multiple cement trucks would add even further complexity. When multiple resources with multiple instances are to be used at the same time, the concept of tagging resources quickly becomes confusing and error-prone, and understanding such a model as a non-expert can be very difficult.
2.5 Modeling and Simulation of Construction Processes 33
Figure 2.9: Bored Pile Wall Example as a block diagram in the notation for the General
Purpose Systems Simulation (GPSS) with two drilling rigs. The call element on top out-
puts each pile that needs to be produced one by one. In node 2 a drilling rig is selected for
each pile, after which the respective rig is used for production, creating the two parallel
streams for seizure of the rig. Once the rig is seized, the streams merge again, producing the pile, and eventually releasing the seized rig. Each pile's production finally
terminates in node 22.
The Activity Cycle Diagram (ACD) was introduced by Tocher (1963) to model interactions of system objects. It distinguishes two kinds of states, idle and active, that can be used to describe simulation entities. Different arrows among the entities describe the state changes of resources. Hence, the arrows indicate the flow of the resources between active and idle states and therefore the assignment of resources to the individual processes. In
order for a state to become active, all incoming idle states must be holding the required
resources.
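The activation rule stated above can be sketched compactly. The following Python sketch is purely illustrative: the state names follow Figure 2.10, while the data structures are assumptions, not part of any ACD tool.

```python
# Sketch of the ACD activation rule: an active state may only be entered when
# every incoming idle state holds the required resource.
def can_activate(active_state, idle_states):
    return all(idle_states[s] > 0 for s in active_state["inputs"])

idle_states = {"next hole": 1, "ready": 1}             # resources held per idle state
drill = {"name": "DRILL", "inputs": ["next hole", "ready"]}

assert can_activate(drill, idle_states)
idle_states["ready"] = 0                               # rig not yet adjusted
assert not can_activate(drill, idle_states)
```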
Figure 2.10: Bored pile wall example as an Activity Cycle Diagram (ACD). This diagram
shows the flow of the resources as differently colored arrows. First a hole is selected in
the "next hole" idle state, going into the "DRILL" active state, requiring the drilling rig
to be in the idle state "ready". Once drilling is done, the drilling rig can move to the idle
state "done 1", while the drilled hole can move to the idle state "done 3". Then the drilling
rig enters the "MOVE" active state, freeing the drilled hole, which goes into the "done 4"
state. The drilling rig moves into the "done 2" state, being ready to be adjusted for the
next hole. The drilled hole can now be filled with concrete, hence entering the "POUR
CONCRETE" active state, given the cement truck is not currently filling another hole,
hence is in the "wait" state. Once the hole is filled, the hole resource can proceed to the
next hole.
For the bored pile wall example, depicted in Figure 2.10, three types of arrows were
introduced, forming three individual cycles for each of the involved resources, the drilling
rig cycle using red solid arrows, the cement truck cycle using green dashed arrows, and
the drill hole cycle using blue dotted arrows. The drilling rig moves along the cycle of
the active states "MOVE", "ADJUST", "DRILL". The cement truck moves along the cycle "POUR CONCRETE", and the drill hole cycles between the active states "DRILL",
"MOVE", and "POUR CONCRETE". When the drilling rig is in the idle state "done 1"
following the active state "DRILL", it can commence to move to the next drilling position.
After having moved, it continues to the idle state "done 2", after which it directly goes into
the active state "ADJUST". After that it is ready for drilling the next hole. Before the
system can enter the active state "DRILL", the idle state "next hole" needs to hold the
"Drill hole" resource, which is only freed after the concrete has been poured. Once the
"DRILL" state has been finished, both the drilling rig and the drill hole resource are freed
and move to the respective idle states "done 1" and "done 3". After that the active state
"MOVE" can be entered, for the drilling rig to move on to the next location. As the cement
truck must be in the idle state "wait" and the drill hole resource moved to the idle state
"done 4" after the drilling rig moved, the active state "POUR CONCRETE" can now be
entered. Once the concrete has been poured, the drill hole resource moves to the "next
hole" state and the cement truck moves into the "wait" state. At this point the cycle can
start over.
As can be noticed, this diagram forces the pouring of the concrete to be finished before the next hole can be drilled, although in practice pouring could take place in parallel to the next drilling process. This is due to the artificial drill hole resource, which is required to synchronize the concrete pouring cycle and the drilling rig cycle. This shows that the ACD notation is only feasible for strictly cyclical problems, such as excavation activities, and in this case does not allow sufficient synchronization of the concreting and drilling cycles.
The first modeling approach that has been widely applied to construction processes was
published by Halpin (1977) and is called CYCLONE (CYCLic Operations NEtwork).
Compared to the previously described Activity Cycle Diagram (ACD), CYCLONE introduces several new elements that allow the modeling of more complex systems. In place of the single active state, CYCLONE distinguishes between normal and combi elements, as well as queue and function elements. Furthermore a formal counter element is introduced for statistical evaluation purposes. A normal state has a single incoming arrow from an arbitrary other element; as soon as a resource is handed over, the state can be entered. A combi state, on the other hand, can have multiple incoming arrows, all of which need to hand over a resource in order for the state to be entered. A queue element
collects incoming resources and supplies them through the outgoing arrows. A function
element spawns new resources to be supplied through outgoing arrows.
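The semantics of a combi element fed by queue elements can be sketched as follows. The sketch is illustrative only; it borrows the element names of Figure 2.11, and the data structures are assumptions rather than part of the CYCLONE system.

```python
from collections import deque

# A combi state fires only when every input queue element can supply a resource.
queues = {"holes 2": deque(), "truck": deque(["truck 1"])}

def combi_ready(inputs):
    return all(queues[q] for q in inputs)

pour_inputs = ["holes 2", "truck"]
assert not combi_ready(pour_inputs)      # no drilled hole queued yet
queues["holes 2"].append("hole 1")       # function element "holes 1" supplies a hole
assert combi_ready(pour_inputs)
```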
The bored pile wall example as a CYCLONE diagram is depicted in Figure 2.11.
Beginning in the state "MOVE", the drilling rig can immediately enter the state "ADJUST"
after moving to the appropriate position. After adjusting, it will drill a new hole, which is
supplied through the function element "holes 1". After the drilling process is completed,
the drilling rig can move on to the next state, hence the state "MOVE" becomes active. At
the same time the drilled hole is handed over to the queue element "holes 2". Given that
the concrete truck is being held in the queue element "truck", the "POUR CONCRETE"
combi state can be activated. Once the concrete has been poured, the hole is handed to the
Figure 2.11: Bored pile wall example as a CYCLONE diagram. The colors of the arrows
indicate the resource cycle, same as in Figure 2.10. The function element "holes 1" out-
puts the piles that need to be produced. The combi-state "DRILL" is then executed given
the drilling rig is available. Once the drilling is finished the drilling rig moves away in the
"MOVE" active state and adjusts in the "ADJUST" active state. The hole goes into the
"holes 2" queue element, where the holes queue up, waiting for the cement truck. The ce-
ment truck moves to the "truck" queue element when finished and then becomes available
for the combi-state "POUR CONCRETE". Once a hole is filled with concrete, it moves
to a counter element and proceeds to the "piles done" queue element.
counter element, counting the number of finished piles, then it ends in the queue element
"piles done" and the truck is fed back to the queue element "truck". At the same time
the drilling rig may have drilled new holes, filling up the queue element "holes
2". Due to the queuing and function element functionality of CYCLONE, the bored pile
wall example cycle can be adequately modeled in this diagram, with the drilling and concreting cycles being synchronized through the "holes 2" queuing element. If the drilling
cycle advances faster than the concreting cycle, the "holes 2" queuing element will collect
drilled holes, ready to be filled with concrete, while in the opposite case, the concrete cycle
will idle until a new hole is ready to be filled. CYCLONE is still used frequently due to its
simplicity, while still being capable of representing complex scenarios (Huang & Hsieh,
2012). A system called PROject SImulation Dragados Y Construcciones (PROSIDYC), which is based on CYCLONE, has found its way into real-life use, where it was able to facilitate significant improvements in project execution (Halpin & Martinez, 1999).
A decision project managers face regularly is the selection of resources for different processes. A machine might be able to perform an operation more efficiently, but at the same time incur higher costs or block a larger area on the construction
site. For this purpose CYCLONE was extended by a forking element, resulting in the
STate and ResOurce Based Simulation of COnstruction ProcEsses (STROBOSCOPE)
(Martinez & Ioannou, 1994). This extension allows multiple different ways of reaching
a certain goal. The fork element allows the resource flow to leave through any one of
the outgoing arrows. This can mean different orders of executing sub-processes or using
different resources for a process, resulting in different performance factors.
Figure 2.12: Bored pile wall example as a multi-mode STROBOSCOPE process diagram.
Compared to the CYCLONE diagram in Figure 2.11, the diagram was extended by the
fork element, offering a choice between moving to two different piles to be drilled next.
For instance, the next one to the left or to the right.
Figure 2.12 shows a STROBOSCOPE process diagram, where the resource flow of the
drilling rig encounters a fork element after drilling, offering two choices of which way to
go to the next hole. In a larger context the fork element can also be used to switch between
multiple resource flows, assigning different processes to different machines. However, allowing the production of the individual piles in an arbitrary order is very complex, as all those piles would need to enter the system at the same time and many fork elements would be needed to model the choices. In the example the choice can be viewed as moving to the next pile that needs to be drilled, either to the left or to the right. If additional piles or machines were introduced, the entire model would need to be updated.
Shi (1999) presented the Activity-based construction (ABC) modeling and simulation
method. Compared to the previously described approaches ACD and CYCLONE, ABC
uses an activity-based representation of the construction processes. The representation of
a construction project is reduced to a collection of activities and resources. Each activ-
ity contains certain attributes, including duration of execution, logical dependencies and
resource demands. Compared to ACD and CYCLONE, there is no distinction between
active and idle states as those arise from the constraints, such as resources and precedence
relationships, being met. A process that requires an unavailable resource is naturally forced to be idle. Resources are indicated by colors, and arrows indicate precedence
relationships.
Figure 2.13: Bored pile wall example as an activity-based construction diagram. The
diagram only consists of the four activities "DRILL", "MOVE", "ADJUST" and "POUR
CONCRETE". The arrows indicate precedence constraints. The fill color of the rectangles
indicates the required resource. In order to drill, the rig needs to be adjusted; in order to
move to the next location it needs to have finished drilling, and in order to adjust it needs
to have moved to the next location. Before the concrete can be poured, the hole needs to
be drilled and the rig needs to have moved away from the hole.
The bored pile wall example in this notation is illustrated in Figure 2.13. It shows only
four activities in two colors. Green implies that the activity requires the drilling rig to be available, while yellow implies that the concrete truck is required for execution. Again,
starting with the "MOVE" activity, the "ADJUST" activity can start directly afterwards.
Immediately after that the "DRILL" activity can commence. After the "DRILL" activity
is finished, the drilling rig needs to move to the next location, hence the "MOVE" activity
needs to be finished, for the "POUR CONCRETE" activity to have satisfied precedence
constraints. Given the concrete truck is available, the activity can be executed. Similar to
CYCLONE, this approach allows for the bored pile wall example to be modeled, allowing
parallelization of the two cycles, while being synchronized through precedence relationships. When the drilling rig moves to the first location to be drilled at, only one of the two precedence constraints of the "POUR CONCRETE" activity is satisfied; only after the first hole is drilled and the drilling rig has moved on does the activity become executable.
This example shows that ABC offers a much more intuitive way to model construction operations, one that is far more accessible to non-professionals.
Figure 2.14: Bored pile wall example as a non-cyclic sequential process diagram. In
this representation, each rectangle is an activity that will only be executed once, and then
marked as finished. The order of execution needs to respect the precedence relationships
indicated by the arrows and the availability of resources, which are indicated by the fill
color of the activity rectangles. The diagram enforces the order the piles are produced in.
Figure 2.15: Bored pile wall example as a non-cyclic parallel process diagram. Compared
to the diagram in Figure 2.14, this diagram does not enforce the order of pile production
and allows full parallel execution if multiple drilling rigs or concrete trucks are present.
The representation shown in Figure 2.15 allows full parallelization of the individual
drilling and concrete activities, while being embedded in a larger context. First the site
needs to be levelled in preparation; when this is done, drilling rigs can move to the individual locations where holes need to be drilled. After adjusting and drilling, the rig needs
to "MOVE ON" in order to free the hole for a concrete truck to be able to execute the
"POUR CONCRETE" activity. The additional "MOVE ON" activity is required as it is
not specified where the drilling rig moves after finishing drilling. When all piles are finished, the pit, now secured by the bored pile wall, can be excavated. This representation allows full parallelization with an arbitrary number of drilling rigs and concrete trucks, though it does not take into account that the drilling rig usually blocks an area covering more than a single drilling location. The number of parallel paths in the graph is determined by the number of mutually independent piles to be produced.
Figure 2.16 additionally includes a simple geometry representation, which on the one hand does not require the artificial "MOVE ON" activity and on the other hand allows the space blocked by each activity to be represented explicitly.
Figure 2.16: Bored pile wall example as a non-cyclic parallel process diagram with ge-
ometry. The diagram in Figure 2.15 extended by a representation of geometric space,
indicated by the blue parallelograms. The spaces may be overlapping as the drilling rig
might block multiple holes while positioned above one hole.
In Figure 2.16, the problem is illustrated as a directed acyclic graph, where each node rep-
resents one activity and edges represent precedence relationships. This notation is called
the Activities-on-the-Nodes (AoN) notation. In contrast to this notation, the Activities-on-
the-Arrows (AoA) notation represents activities by edges and the nodes represent states
(Ponz-Tienda et al., 2015). Figure 2.17 shows an AoA representation of the bored pile
wall example. As the AoN representation is more intuitive to read and both representations can be converted into each other (Yang & Wang, 2010), the AoN representation
is used throughout this thesis.
In order to adequately describe a construction project several elements are required, which are characterized by different attributes. This Section provides an overview of the
different classes of elements and their capabilities.
2.6.2.1 Activities
Activities are actions performed on and around a construction site, contributing to the
overall progress of the project (Zhang et al., 2005b). Each activity is defined by a set of
parameters. The duration specifies how much time it takes to execute the action. Further-
more an activity may require certain resources, such as machines, or materials, such as
concrete, to allow execution. Additionally it may be required that certain other activities
must be finished or in progress, for the activity to be executed. Most activities create
costs by using resources or materials, so these costs are also an important parameter of
an activity. The costs may also be a function of the duration. In order to determine
robustness of schedules, the uncertainty of each activity’s duration is an important mea-
sure (Artigues et al., 2013; Deb & Gupta, 2006). Therefore a probability distribution for
the duration is part of the required parameters and may influence the cost parameter, as a
prolonged activity might increase running costs, like fuel or wages. More precisely those
costs can be modeled on a resource level, such that the activity itself does not create costs,
but the resources and materials that are used do. A milestone can be viewed as a special
kind of activity that has no costs or duration and uses no resources, but depends on certain other activities being finished.
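The activity parameters listed above can be collected in a simple record. The following Python sketch is purely illustrative; the field names are assumptions and not taken from an existing implementation.

```python
import random
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Activity:
    name: str
    sample_duration: Callable[[], float]           # duration probability distribution
    resources: dict = field(default_factory=dict)  # e.g. {"drilling rig": 1}
    predecessors: list = field(default_factory=list)
    cost_per_hour: float = 0.0                     # running costs (fuel, wages)

    def sample_cost(self) -> float:
        # cost as a function of the (uncertain) duration
        return self.sample_duration() * self.cost_per_hour

drill = Activity("DRILL", lambda: random.triangular(2, 6, 3),
                 {"drilling rig": 1}, ["ADJUST"], cost_per_hour=120.0)
milestone = Activity("piles done", lambda: 0.0, predecessors=["POUR CONCRETE"])
```

A milestone is then simply an activity with zero duration, no resources, and no costs.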
2.6.2.2 Resources
Resources describe renewable entities on the construction site that can be temporarily
used by activities. Resources are commonly available on site in limited quantities and
therefore form one of the strongest constraints in construction scheduling. The usage of
resources may be accompanied by costs, such as fuel or wear and tear. Unless modeled
on an activity-level, the costs for using, hiring, and transporting resources are parameters
of those resources.
2.6.2.3 Materials
Materials are non-renewable entities on site that can be consumed by activities. Although materials have associated costs, the amount of material used on a construction site is usually constant; it does not change when activities are performed at different times. Including material costs in the project model can still be relevant, as it allows the cost flow during the execution phase to be calculated, and there may be strict budgets for the different construction phases.
Every activity that is performed on a construction site will require some space to be performed in, which will be unavailable for other activities during its execution. Those spaces
can be represented either as two dimensional areas on the site, or as three dimensional
volumes inside a building. As geometric conflicts form a strict limit on the feasibility of
construction schedules, it is important to include geometry into the project model.
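A geometric conflict check between two such spaces can be sketched as follows, simplified to axis-aligned rectangles; the coordinates are invented for illustration.

```python
# Conflict test between two activity spaces given as (xmin, ymin, xmax, ymax).
def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

rig_area = (0, 0, 4, 2)      # area blocked by the drilling rig
pour_area = (3, 1, 6, 3)     # area needed for pouring concrete
assert overlaps(rig_area, pour_area)         # conflict: no concurrent execution
assert not overlaps(rig_area, (5, 0, 8, 2))  # disjoint areas are compatible
```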
Most of the constraints limiting the execution of construction schedules are logical con-
straints, such as a slab requiring underlying columns in order to be produced, or a wall that needs to be plastered prior to being painted. This creates start-finish (SF) precedence constraints, as described in Section 3.3.1. Additionally safety considerations might
create further limitations (Melzner et al., 2012) that can for instance prevent concurrent
execution of certain activities. Furthermore soft-constraints can be included, representing
practical aspects of certain activities, such as the requirement that dirty activities be finished before the walls are painted (Beißert et al., 2010). A special kind of constraint is the start-start
(SS) precedence constraint which requires another activity to have started execution, but
it does not necessarily have to be finished (Section 3.3.1).
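The two constraint types can be expressed as simple predicates. The sketch below follows the SF/SS naming used above (per Section 3.3.1); the function signatures are assumptions.

```python
def sf_satisfied(pred_finish_time, t):
    """Start-finish: the preceding activity must be finished by time t."""
    return pred_finish_time is not None and pred_finish_time <= t

def ss_satisfied(pred_start_time, t):
    """Start-start: the preceding activity must have started by time t."""
    return pred_start_time is not None and pred_start_time <= t

# Plastering finished at t=3 allows painting to start at t=5, but not at t=2.
assert sf_satisfied(3.0, 5.0)
assert not sf_satisfied(3.0, 2.0)
assert ss_satisfied(0.0, 2.0)    # predecessor started, may still be running
```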
Construction processes can be derived from the product that has to be produced. The
required durations are mostly derived from product metrics, such as volumes or surface areas.
Those metrics can be extracted from digital representations of the product.
For this purpose Hendrickson et al. (1987) developed a system called CONSTRUC-
TION PLANEX, an expert system for the automatic generation of schedules based on
a database of construction techniques and resource requirements. It formed the first step towards a product-model-based approach to construction scheduling, taking the most important factors into account, including precedence relationships,
activity durations, and costs.
Another approach to this problem was published by Navinchandra et al. (1988), in-
troducing a system called GHOST, which started from a very optimistic plan with com-
ponents, connected by few or no precedence constraints, then using a knowledge base to
refine that model by introducing constraints to remove violations. This makes it possible to detect shortcomings in the available models and to develop rules for improving future models.
Winstanley et al. (1993a,b) designed a similar system, called OARPLAN, that linked
activities to components. So instead of assuming that one component is created by one
process, multiple activities can be required for a single component or vice versa. The
information required to do so was extracted from CAD models.
AbouRizk & Mather (2000) presented the idea of simplifying the development of
simulation models for earthmoving operations by an integration with CAD modeling.
Chahrour (2006) generalized this idea and developed an integrated CAD-based simulation
model, that allowed planners to enrich the CAD model by data, required for an integrated
simulation model, during the modeling process. Chahrour & Franz (2006) argue that
this step greatly aids the practical usability of simulation models, as CAD systems are a
commonly known environment among construction planning experts.
Building Information Modeling (BIM) is the process of creating digital representations of existing buildings, or of new buildings during their planning phase. The models,
besides geometry, should contain physical and functional characteristics of the building.
The concept was first envisioned by Law & Jouaneh (1986) and Björk (1989, 1992).
Popov et al. (2006) presented possibilities to use those digital building models during the
entire process of design, scheduling and construction. Lately this idea gained increasing
popularity in the construction industry (Dossick & Neff, 2010), while even more capabil-
ities of BIMs were discovered, including safety checking (Melzner et al., 2012), progress
monitoring (Bosché et al., 2015; Braun et al., 2015a; Matthews et al., 2015), and produc-
tivity assessment (Poirier et al., 2015).
Tauscher (2011) and Tauscher et al. (2014) took the approach of Chahrour & Franz
(2006) into the BIM age by presenting an approach to automatically extract construc-
tion sequences from building information models. The increase in information available
through the BIM approach made it possible to minimize the human interaction required for construction simulation modeling.
Dzeng &amp; Tommelein (2004) developed a case-based system that tries to identify previously encountered patterns on the basis of a knowledge base and apply solutions that
were executed in the past. The identification is performed on the basis of product models similar to the approach by Fischer &amp; Aalami (1996)1. Later the idea was developed
further into a system tailored to work with Building Information Models by Sigalov &
König (2015).
Query languages, such as QL4BIM (Daum &amp; Borrmann, 2014), can use spatial constraints as a filter criterion applied to building information models. The resulting constraints are based on geometric relationships, such as a slab resting on a set of columns, which need to be produced prior to the slab (Braun et al., 2015a). Those kinds of constraints can be used to create a rough precedence graph of a building to be constructed. Figure 2.18 shows an example of a building model with six stories. The generated precedence graph is displayed next to the building. Each of the nodes represents one activity and the connections represent precedence relationships. The graph narrows down on each slab, as the following columns and other elements can only be constructed after the slab has been produced.
1 Up to this point, this Section was in part inspired by Huhnt &amp; Enge (2006).
Figure 2.18: Building information model and its automatically derived precedence graph.
Illustration from Braun et al. (2015a).
Marx & König (2011), König et al. (2012), as well as Braun et al. (2015b) showed that
this concept is equally feasible on a larger scale in real-life BIMs. Additionally building
information models can contain information about building materials, as well as building
techniques. This data gives rise to the involved activities, required resources, materials,
durations, and geometric needs.
Once a representation of a construction project has been modeled, it can be used to gen-
erate a feasible schedule. Since such a schedule is not necessarily good or even optimal,
optimization procedures such as those described in Chapters 3 and 4 are highly useful. As
construction operations naturally consist of many individual activities that start and finish
at specific points in time, they can be described by a finite number of discrete time instants at which activity states change. The simulation of a system whose state changes only at discrete time instants is called discrete-event simulation
(DES). This notion was formalized by Fishman & Kiviat (1967).
Since discrete-event simulation models exactly this kind of system, it is an appropriate tool to use when simulating construction projects. In order to run a discrete-event simulation, the state of the system has to be defined for each time step and the times when the state changes need to be determined. In the case of the construction simulation model used throughout this thesis, the state of the system is defined by the state of each activity and the availability of each involved resource.
The precise definition of an entire construction project in these terms is described in
Section 2.6.2. Schruben (1983) described simulation modeling in terms of event graphs
and introduced the notation shown in Figure 2.19. It shows two events j and k connected
by a transition taking time t and requiring condition (i). The event graph based represen-
tation is an activities-on-the-arrows (AoA) representation as described in Section 2.6.1.
However it has been shown that an activities-on-the-nodes (AoN) representation is more
intuitive for the user and can be transformed to an AoA notation (Yang & Wang, 2010).
This means that the events j and k correspond to the start and end times of an activity,
the transition time t is the duration of the activity, and that (i) describes precedence and
resource constraints.
Figure 2.19: Notation of simulation modeling with event graphs (Schruben, 1983). Two
states j and k are connected by an activity that requires time t and constraints (i) to be
executed.
In order to perform a simulation run, the resource constraints need to be set by setting a counter for each resource to the number of available units. The simulation clock
is initialized at time t0 = 0. Then a list of activities with no precedence constraints is
assembled. From this list, an activity A is selected and scheduled to be executed at time
t0 = 0. The required resources are allocated by reducing the respective counters by the
required numbers. A new event is generated at the finish time tf (A) of activity A. The
activity is now considered active. Then the list of executable (precedence and resource
feasible) activities is updated and, if possible, another activity is scheduled at t0 = 0, and
an event is generated for the finish time. If no more executable activities are available,
the simulation clock is advanced to the earliest finish time of all active activities, which
defines the next discrete event. As at this event one or more activities end, the respective
resources are freed and those activities are marked as finished and are no longer active. At this
point a new list of precedence and resource feasible activities is generated, and an activity
is selected for execution. If no more activities are executable the simulation clock is ad-
vanced. This is repeated until all activities have been scheduled. This process is illustrated
in the flowchart shown in Figure 2.20. The node "Select feasible activity" allows for different selection rules that select activities from the set of currently feasible ones. The order in
which the activities are scheduled forms a degree of freedom in scheduling. Based on this
degree of freedom different optimization procedures can be deployed as is described in
Chapters 3 and 4. Beißert et al. (2007) and Beißert (2010) introduced constraint-based discrete-event simulation, where the activity selection is performed randomly and
the results are evaluated using Monte Carlo Methods. Other approaches use different pri-
ority rules, resulting in the parallel schedule generation scheme that is described in detail
in Section 3.4.4.3.
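The simulation loop described above can be sketched in a few lines of Python. The activity network is a simplified, invented version of the bored pile example, and the selection rule simply takes the first feasible activity; this is an illustrative sketch, not the thesis implementation.

```python
import heapq

activities = {
    "ADJUST": {"dur": 1, "pred": [], "res": {"rig": 1}},
    "DRILL":  {"dur": 3, "pred": ["ADJUST"], "res": {"rig": 1}},
    "MOVE":   {"dur": 1, "pred": ["DRILL"], "res": {"rig": 1}},
    "POUR":   {"dur": 2, "pred": ["DRILL", "MOVE"], "res": {"truck": 1}},
}
available = {"rig": 1, "truck": 1}      # resource counters
finished, start_times, events = set(), {}, []
clock = 0.0

def feasible():
    # precedence and resource feasible, and not yet scheduled
    return [a for a, d in activities.items()
            if a not in start_times
            and all(p in finished for p in d["pred"])
            and all(available[r] >= n for r, n in d["res"].items())]

while len(finished) < len(activities):
    ready = feasible()
    while ready:                         # schedule all currently executable activities
        act = ready[0]                   # selection rule: degree of freedom
        for r, n in activities[act]["res"].items():
            available[r] -= n            # allocate resources
        start_times[act] = clock
        heapq.heappush(events, (clock + activities[act]["dur"], act))
        ready = feasible()
    clock, act = heapq.heappop(events)   # advance clock to earliest finish time
    finished.add(act)
    for r, n in activities[act]["res"].items():
        available[r] += n                # free resources
```

In this run "ADJUST" starts at time 0, "DRILL" at 1, "MOVE" at 4, and "POUR" at 5, yielding a makespan of 7 time units.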
Generally the process of simulation, either as a type of schedule generation scheme, or
as constraint-based discrete-event simulation, forms an important basis for many heuris-
tic algorithms solving resource-constrained project scheduling problems, which are in-
troduced in Section 3.4. Furthermore specific metrics obtained through the simulation
process can be used for certain scheduling procedures, such as the float time calculation
presented in Section 3.4.6.
As described in Chapter 3 in terms of schedule generation schemes, this process is not
necessarily capable of finding the optimal schedule as defined in Section 3.4.4.1.
Figure 2.20: Flowchart of the discrete-event simulation process. At time t = 0 the system
is initialized with all resources idling. As long as there are unscheduled activities the main
loop continues. It creates a list of feasible activities, selects a feasible activity, executes
it and allocates the respective resources. Once no more activities are feasible, time is
incremented to the next finishing time, activities are finished and the resources freed.
2.7 Summary
This Chapter introduced the concept of performance factors, the modeling approaches
to represent uncertainty in terms of different probability distributions, representations of
costs as well as the trade-offs to be made when deciding upon the granularity of process
and project models. The state-of-the-art in cyclic process modeling and non-cyclic project
modeling has been laid out in terms of the required components and the different modeling
approaches. Finally, discrete-event simulation approaches for performing simulations of the created models have been described.
Simulation is a very capable tool to investigate systems theoretically, based on an
abstract description. While very successful in academic research, construction simulation
has not yet been widely adopted by the architecture, engineering, and construction (AEC)
industry for various reasons, including lack of simplicity, as well as time, costs, and skills
required to set up a usable simulation model (Scherer & Ismail, 2011).
A major limitation is the accessibility of simulation systems to the construction professional, who is not necessarily a simulation expert. In the construction context, setting up a model can be reduced to describing the activities required to complete parts of a project, or a complete project, along with their durations, precedence relationships, and resource demands, as described throughout this Chapter. This makes the simulation domain far more accessible than describing the problem using the General Purpose Simulation System (GPSS), or programming in general-purpose programming languages.
Furthermore, advances in the field of building information modeling provide novel data sources, increasing the usability of simulation systems even further. However, this approach is still subject to several limitations. Due to the required abstractions, the model may not adequately reflect the reality on the construction site. It is up to the user to decide upon the trade-off between granularity on one hand and design and simulation effort on the other. Furthermore, input data such as expected activity durations may be inaccurate and lead to discrepancies. This can, however, be addressed by a representation of the uncertainty in the durations, costs, or underlying performance factors. Overall, these limitations and trade-offs need to be known to the user of construction simulation systems in order to create sufficiently accurate, but feasible simulation models.
Once a model of a project and its individual processes, as well as a simulation approach, has been chosen, there are different ways to generate schedules for the execution of the building process. As these schedules are vital for the success of a project, it is desirable to generate schedules that are as close to optimal as possible with respect to the desired metrics. Therefore, different optimization approaches have been developed and published in the literature, which are presented throughout the next Chapter.
3 State-of-the-Art in Construction
Schedule Optimization
Figure 3.1: Bored pile wall project example illustration viewed from above. A rectangular
pit is to be excavated. To prevent earth from caving in, two sides of the pit have to be
secured using anchored bored pile walls.
It shows the two bored pile walls, which meet in the upper left of the illustration.
Every second pile will be anchored into the surrounding soil to prevent the piles from
falling over. The illustration shows four distinct excavation areas labeled 1 through 4. In order for the anchors to be placed, the respective bored pile wall needs to be finished, and the excavation area has to be partly excavated to the depth at which the anchors should be placed, in order to make room for the required machinery. After the anchors have been placed, the remaining excavation of the area can proceed. After all excavation activities are finished, the foundation can be produced. The degrees of freedom of this example consist of the order in which the excavation areas are excavated and of when the bored pile walls are produced. Areas 1 and 2 depend on the bored pile walls, while 3 and 4 are independent.
efforts. Therefore, it can be more economical to have a machine idle for a certain time than to move it to another site for the time it is not required. The costs for the availability of machines, but also for the stocking of materials, should be part of cost-oriented optimization.
Another important aspect is the robustness of a schedule, which describes how little the changes in the durations of individual activities impact the overall makespan. Additionally, leveling the usage of certain resources towards constant utilization can be desired, forming the Resource Leveling Problem (RLP) (Ponz-Tienda et al., 2013).
Generally, optimization problems in the domain of construction scheduling are in most cases multi-objective problems, in which, for instance, resource usage and the overall makespan are both objectives to be minimized.
Sabuncuoglu & Goren (2009) and König (2011b) describe proactive and reactive ap-
proaches to dealing with uncertainty of construction operations. Reactive approaches
imply that the schedule is updated when deviations occur, while proactive approaches try
to anticipate the deviations by modeling uncertainty as described in Section 2.2 and there-
fore maximize the robustness. Since construction processes are highly volatile, robustness
may well be among the most critical measures in construction schedule optimization.
König (2011a) defines the robustness of a construction project schedule as minimizing the effects of foreseeable variations of the durations of construction activities on the
schedule. Herroelen & Leus (2004) define it as incorporating a degree of anticipation of
variability during project execution. They furthermore distinguish quality robustness and
solution robustness. Quality robustness means that deviations in activity durations have
minimal impact on the quality of the schedule, which implies that the changes to the value
of the objective, such as the makespan, are minimal. Solution robustness means that the
changes to the schedule that may be caused by deviations are minimal.
Furthermore, Chtourou & Haouari (2008) define twelve measures of robustness of a given schedule in terms of the slack si, the number of direct successors Nsucci, the number of units rik of resource type k used, the average percentage increase in activity duration κi, and the slack indicator αi defined in Equation 3.1. All measures are listed in Table 3.1. The first measure M1 maximizes the sum of all float times, implying that a schedule with more float time is more desirable. If, for instance, a single activity with high uncertainty had zero float time, this would be considered equally good as an activity with very low or no uncertainty having zero float time. It is also more desirable for activities with a large number of follow-up activities to have larger float times, which is expressed by M2. M3 instead takes the resource usages into account by making high float times for activities with high resource usage more desirable, as resources might be blocked by previously delayed activities and therefore cause more potential instability. M4 is a combination of both previous measures. M5 maximizes the number of non-critical activities: a lower number of activities with zero float time is preferred. M6 again takes the number of successors into account, while M7 takes resources into account and M8 combines both aspects. M9 minimizes the total average increase in duration, given it
is lower than the float time for each activity. M10, M11, and M12 expand this idea by taking successors and resources into account.
\[
\alpha_i = \begin{cases} 1 & \text{if } s_i > 0 \\ 0 & \text{otherwise} \end{cases} \qquad \forall i \in J \tag{3.1}
\]

\begin{align*}
M_1 &= \sum_{i \in J} s_i &
M_7 &= \sum_{i \in J} \sum_{k \in R} (\alpha_i\, r_{ik}) \\
M_2 &= \sum_{i \in J} s_i\, N_{succ_i} &
M_8 &= \sum_{i \in J} \sum_{k \in R} (\alpha_i\, N_{succ_i}\, r_{ik}) \\
M_3 &= \sum_{i \in J} \sum_{k \in R} (s_i\, r_{ik}) &
M_9 &= \sum_{i \in J} \min(s_i, \kappa_i t_i) \\
M_4 &= \sum_{i \in J} \sum_{k \in R} (s_i\, N_{succ_i}\, r_{ik}) &
M_{10} &= \sum_{i \in J} \min(s_i, \kappa_i t_i) \cdot N_{succ_i} \\
M_5 &= \sum_{i \in J} \alpha_i &
M_{11} &= \sum_{i \in J} \sum_{k \in R} (\min(s_i, \kappa_i t_i)\, r_{ik}) \\
M_6 &= \sum_{i \in J} \alpha_i\, N_{succ_i} &
M_{12} &= \sum_{i \in J} \sum_{k \in R} (\min(s_i, \kappa_i t_i)\, N_{succ_i}\, r_{ik})
\end{align*}
Another way of determining the robustness of the resulting schedule, if the probability
distributions of the activities’ durations are known or can be determined as described in
Section 2.2, is Monte-Carlo simulation (Wübbeler et al., 2008). This approach is based on performing multiple simulation runs or schedule generations, sampling new duration values from the respective distributions for each run. When sufficiently many runs are performed, a distribution of the overall makespan emerges, which yields the probability of delays. For instance, the probability of certain milestones being met can be calculated on this basis.
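This sampling procedure can be sketched in Python for a simple network given by its paths. As an assumption for illustration, durations are drawn from triangular distributions over the optimistic/most-likely/pessimistic values of the bored pile wall example, standing in for the distributions of Section 2.2.

```python
import random

def monte_carlo_makespan(paths, sample, runs=10000, seed=42):
    """Monte-Carlo estimate of the makespan distribution: in each run, every
    activity duration is resampled and the makespan is the longest path sum."""
    random.seed(seed)
    activities = {a for p in paths for a in p}
    makespans = []
    for _ in range(runs):
        d = {a: sample(a) for a in activities}
        makespans.append(max(sum(d[a] for a in p) for p in paths))
    return makespans

# the two chains of the example network (E3/E4 omitted, as they are short)
paths = [['B2', 'E12', 'A2', 'E22', 'F'], ['B1', 'E11', 'A1', 'E21', 'F']]
bounds = {'B1': (3, 4, 6), 'B2': (5, 6, 8), 'E11': (4, 5, 6),
          'E12': (4, 5, 6), 'A1': (5, 6, 9), 'A2': (5, 6, 9),
          'E21': (4, 6, 9), 'E22': (4, 6, 9), 'F': (5, 6, 7)}
sample = lambda a: random.triangular(bounds[a][0], bounds[a][2], bounds[a][1])
ms = monte_carlo_makespan(paths, sample, runs=2000)
# estimated probability that a milestone of 32 time units is met
p_on_time = sum(m <= 32 for m in ms) / len(ms)
```

The resulting list of makespans approximates the makespan distribution, from which milestone probabilities such as `p_on_time` can be read off.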
In a more general context Chassein & Goerigk (2016) describe the problem of robust
optimization as a bicriterial problem in which the two criteria are formed by the average
and the worst-case performance. Both of these criteria should be subject to optimization.
Therefore multi-criteria optimization is an important part when aiming to take robustness
into account.
In the field of multi-criteria optimization the work of Pareto (1906) plays an important role, as he defined the terms Pareto Optimality and weak Pareto Optimality. Lemmas 1 and 2 define the terms respectively.
Lemma 1. A point, x∗ ∈ X, is Pareto optimal iff there does not exist another point,
x ∈ X, such that F (x) ≤ F (x∗ ), and Fi (x) < Fi (x∗ ) for at least one function (Marler
& Arora, 2004).
Pareto optimal points are on the boundary of the space of feasible solutions. A point
is Pareto optimal if and only if there is no point having the same or a lower value for all
objectives F , but a lower value for at least one objective Fi . The set of points satisfying
that condition is called the Pareto Frontier. An example of such a frontier is illustrated
in Figure 3.2. The blue crosses are 50 random points and the red line shows the Pareto
frontier.
Figure 3.2: Plot of 50 random points and their Pareto frontier illustrated by the thick red
line.
Lemma 2. A point, x∗ ∈ X, is weakly Pareto optimal iff there does not exist another
point, x ∈ X, such that F (x) < F (x∗ ) (Marler & Arora, 2004).
A point is weakly Pareto optimal if and only if there is no other point that improves
on all objectives. All points that are Pareto optimal are also weakly Pareto optimal.
Lemma 3. A point $F^o$ is a utopia point iff $F^o_i = \min_x F_i(x)\ \forall i$ (Yu & Leitmann, 1974).
Figure 3.3: Plot of 10 random points with the light blue star at (0.2; 0.5) indicating the
compromise point. The thin blue lines indicate the minima for each objective, the red star
in the lower left is the utopia point.
For the weighted global criterion, all objective functions are combined into a single function by a weighted exponential sum. The simplest form is given in Equation 3.2. It involves a vector of weights w, which is set a priori in order to define the importance of each objective function. The sum of all weights should equate to 1. Furthermore, the parameter p is set proportional to the importance of minimizing the function with the largest difference to the utopia value $F_i^o$ (Marler & Arora, 2004). An extension of this method was presented by Yu & Leitmann (1974) and is described in Equation 3.3. It does not
require the objective function values to be strictly positive, as it makes use of the utopia
point F o in order to offset each objective function.
\[
U_1 = \sum_{i=1}^{k} w_i \left[ F_i(x) \right]^p, \qquad F_i(x) > 0\ \forall i \tag{3.2}
\]
\[
U_2 = \left\{ \sum_{i=1}^{k} w_i^p \left[ F_i(x) - F_i^o \right]^p \right\}^{\frac{1}{p}} \tag{3.3}
\]
The simplest and most common approach to multi-objective optimization is the weighted sum of the individual objectives, as given in Equation 3.4. Each objective is assigned a weight a priori in order to set its importance. This approach is used for the objectives of the uncertainty extension of the PSPLIB problem library described in Section 4.5.
\[
U_3 = \sum_{i=1}^{k} w_i F_i(x) \tag{3.4}
\]
In order to define precedence relationships among the activities, different kinds of constraints can be used. While in Section 2.6.2.5 only one type of precedence relationship
is described, there are multiple types of such constraints. Figure 3.4 shows the four ba-
sic types of precedence relationships with an illustrative feasible schedule, showing the
earliest possible start time for B, relative to A, for each one on the right side. The most
common and most intuitive precedence constraint is the Finish-Start (FS) constraint, re-
quiring the first activity to be finished for the next one to start. If not specified otherwise,
this type of precedence relationship is the default one used throughout this thesis. The
second type is the Finish-Finish (FF) relationship, requiring the second activity to finish
at earliest, simultaneously with the first activity. The Start-Finish (SF) constraint requires
the second activity to finish at earliest when the first activity starts. Finally, the Start-Start (SS) relationship requires the second activity to start at earliest when the first activity starts.
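The earliest start of B relative to A under each of the four constraint types can be written down directly; this small sketch assumes a zero lag and is for illustration only.

```python
def earliest_start_b(ctype, start_a, dur_a, dur_b):
    """Earliest start of activity B relative to A for the four basic
    precedence types of Figure 3.4 (zero lag assumed)."""
    finish_a = start_a + dur_a
    if ctype == 'FS':            # B starts no earlier than A finishes
        return finish_a
    if ctype == 'SS':            # B starts no earlier than A starts
        return start_a
    if ctype == 'FF':            # B finishes no earlier than A finishes
        return finish_a - dur_b
    if ctype == 'SF':            # B finishes no earlier than A starts
        return start_a - dur_b
    raise ValueError(ctype)
```

With A starting at 0 with duration 4 and B of duration 2, the four types yield earliest starts of 4 (FS), 0 (SS), 2 (FF), and -2 (SF); a negative value simply means B may start before A on a relative time axis.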
The illustration on the right scales the durations of the individual activities, which are indicated by the width of the individual boxes. Activity B1 has a duration of 4 time units, E11 and E12 have a duration of 5 time units, and all other activities have a duration of 6 time units.
Figure 3.5: Bored pile wall project example precedence constraints. The two bored pile
walls (B1 and B2) need to be finished before the first layer of excavation (E11 and E12),
after which the pile walls can be anchored (A1 and A2) followed by the second layer
of excavation (E21 and E22). Furthermore excavation activities E3 and E4 need to be
performed. When all excavation is finished the foundation (F) can be produced.
The shortest possible schedule for this problem is shown in Figure 3.6 as a Gantt dia-
gram. A Gantt diagram is a graphical representation of a schedule of activities over time,
as defined by Gantt (1903). The horizontal axis describes the time while the vertical axis
aligns the activities. Each line corresponds to exactly one activity. The arrows between
the activities illustrate the satisfied precedence constraints. It is visible that both strands
are executed in parallel as the precedence constraints in the example do not prevent that.
In order to generate solutions for project scheduling problems, Kelley & Walker (1961) described the Critical Path Method (CPM), which, while not an optimization procedure in itself, formed a basis for many other algorithms. The critical path
method is a two-pass approach consisting of a forward and a backward pass. In each
pass start and finish times for each activity are calculated. During the forward pass the
algorithm starts by selecting all start activities, which have no predecessors. In the case
of the example shown in Figure 3.5 these are B1,B2,E3, and E4. These activities are
assigned an early start time tES of 0, and the early finish time tEF is set to the activity’s
duration added to its early start time. Those times are written on the upper left and upper
right of the activity’s boxes respectively. After that, all activities of which all predecessors
have been treated are selected and the maximum early finish time of all predecessors is
assigned as the early start time of each activity. The early finish time is assigned by adding
the duration. Figure 3.7 shows the example after the forward pass has been executed.
Figure 3.7: Bored pile wall project Example after the forward pass of the critical path
method.
After the forward pass is completed, each activity has an early start and an early finish
time assigned. The second stage of the algorithm consists of the backward pass. For this
pass all precedence relationships are considered in reverse order. The latest of all early finish times is considered to be the project horizon. Therefore, the final activities, which have no successors, are selected initially and are assigned the latest early finish time as their
late finish times tLF . Their late start times tLS are calculated by subtracting the respective
duration from the late finish time. After that, all activities of which all successors have
been treated are selected and their late finish times are assigned to be the earliest late start
time of all successors. Their late start times are again calculated using the duration of the
activities. The late start and late finish times are written on the lower left and lower right of the activities’ boxes respectively. This results in the state illustrated in Figure 3.8.
Figure 3.8: Bored pile wall project example after the backward pass of the critical path
method.
Eventually the total float time, or the slack, si can be calculated for each activity i.
This is done by subtracting the early start time and the duration from the late finish time.
The resulting value represents the amount of time the activity can be shifted or delayed
without changing the overall project duration. Table 3.2 shows the results of the algorithm
for the example. The bottom row shows the total float times for each activity. Each activity
that has a total float time of 0 is considered critical. This means that any delay in those
activities propagates through the entire schedule. The set of all activities with zero total
float time forms one or more critical paths. In this example the critical path is B2 → E12
→ A2 → E22 → F.
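The two passes can be sketched as a short Python function. The dictionary-based interface is an assumption for illustration, not the thesis's notation; the data below are the durations and precedence relations of the example in Figure 3.5.

```python
def critical_path_method(durations, predecessors):
    """Forward and backward pass of the CPM: an activity is processed once
    all of its predecessors (successors, respectively) have been treated."""
    succs = {a: [] for a in durations}
    for a, preds in predecessors.items():
        for p in preds:
            succs[p].append(a)
    es, ef, ls, lf = {}, {}, {}, {}
    pending = set(durations)
    while pending:                       # forward pass
        for a in list(pending):
            if all(p in ef for p in predecessors.get(a, ())):
                es[a] = max((ef[p] for p in predecessors.get(a, ())), default=0)
                ef[a] = es[a] + durations[a]
                pending.remove(a)
    horizon = max(ef.values())           # project horizon
    pending = set(durations)
    while pending:                       # backward pass
        for a in list(pending):
            if all(s in ls for s in succs[a]):
                lf[a] = min((ls[s] for s in succs[a]), default=horizon)
                ls[a] = lf[a] - durations[a]
                pending.remove(a)
    slack = {a: ls[a] - es[a] for a in durations}
    return es, ef, ls, lf, slack

durations = {'B1': 4, 'B2': 6, 'E11': 5, 'E12': 5, 'A1': 6, 'A2': 6,
             'E21': 6, 'E22': 6, 'E3': 6, 'E4': 6, 'F': 6}
predecessors = {'E11': ['B1'], 'E12': ['B2'], 'A1': ['E11'], 'A2': ['E12'],
                'E21': ['A1'], 'E22': ['A2'], 'F': ['E21', 'E22', 'E3', 'E4']}
es, ef, ls, lf, slack = critical_path_method(durations, predecessors)
critical = sorted(a for a in durations if slack[a] == 0)
```

Running this reproduces the result stated above: the activities with zero total float are B2, E12, A2, E22, and F, and the overall project duration is 29 time units.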
The resulting schedule is visualized in Figure 3.9. Each activity is either blue, which
indicates it is critical, or has a gray bar attached, indicating the total float time or the range the activity can be shifted in.
Table 3.2: Results of the critical path method for the example in Figure 3.5.
As the excavation activities E3 and E4 are only required for
the final foundation activity F and do not depend on any other activities being finished, they can be executed anytime before the foundation activity. Also, since the first bored pile wall B1 is built much faster than the second one, as it contains fewer piles, it and all its follow-up activities E11, A1, and E21 preceding F have non-zero float time, and hence are not critical.
Figure 3.9: Visualization of critical path and total float times for the example in Figure
3.5. The gray bars indicate total float times, while the blue activities are critical.
The critical path method is an exact procedure that generates a feasible schedule for project scheduling problems with precedence constraints. It does, however, not consider resource scarcity, which is a commonly encountered constraint in real-life scheduling problems. While the project scheduling problem can be solved exactly using the described method, the need for resources is neglected. Most construction activities require machines, workers, or geometric spaces. These demands add strong constraints in practical
scenarios, which need to be taken into account. Dori (2016) addressed the determina-
tion of float times and the critical path, taking resource constraints into account, which is
discussed in detail in Section 3.4.6.
The Program Evaluation and Review Technique (PERT) was developed by the US Navy in order to forecast research and development progress in large projects, involving a simple representation of uncertainty (Malcolm et al., 1959). Compared to the Critical
Path Method, PERT aims to find a schedule taking the expected duration of activities into
account. The method is based on modeling each involved activity by a set of parameters.
Those parameters include three time metrics which model the uncertainty of the activ-
ity durations, the optimistic duration tO , the most-likely duration tM , and the pessimistic
duration tP . From these three metrics, a fourth metric, the expected duration tE is cal-
culated using Equation 3.5, which is based on assuming that activity durations are beta
distributed as defined in Section 2.2.1 and tO and tP are the limits of the feasible range of
the distribution.
\[
t_E = \frac{t_O + 4 t_M + t_P}{6} \tag{3.5}
\]
Furthermore each activity can have precedence constraints as defined in Section 3.3.1.
All input data is usually represented in tabular form. Such a representation, listing prece-
dence constraints and the four duration metrics for each activity is shown in Table 3.3.
Activity Predecessors tO tM tP tE
B1 3 4 6 4.17
B2 5 6 8 6.17
E11 B1 4 5 6 5
E12 B2 4 5 6 5
A1 E11 5 6 9 6.33
A2 E12 5 6 9 6.33
E21 A1 4 6 9 6.17
E22 A2 4 6 9 6.17
E3 4 6 9 6.17
E4 4 6 9 6.17
F E21,E22,E3,E4 5 6 7 6
Table 3.3: Tabular representation of the example in Figure 3.5 with added uncertainty.
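Equation 3.5 translates directly into code; the asserts below reproduce the tE column of Table 3.3 for a few activities.

```python
def pert_expected(t_o, t_m, t_p):
    """Expected duration per Equation 3.5, based on the assumption of a
    beta-distributed duration with t_o and t_p as the limits of its range."""
    return (t_o + 4 * t_m + t_p) / 6

# reproduces the t_E column of Table 3.3
assert round(pert_expected(3, 4, 6), 2) == 4.17   # B1
assert pert_expected(4, 5, 6) == 5                # E11
assert round(pert_expected(5, 6, 9), 2) == 6.33   # A1
```

Note how the most-likely value tM receives four times the weight of the two extremes, so the expected duration is pulled toward the mode of the assumed beta distribution.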
From the data shown in the table, five time metrics are calculated, the early start time
tES , the late start time tLS , the early finish time tEF , the late finish time tLF , and the
slack time tS . The early finish time tEF of an activity is calculated by adding the expected
duration tE to its early start time tES . The early start time is the latest early finish time tEF
of all its predecessors. The late finish time tLF of an activity is the smallest of all late
start times tLS of all its successors. Activities without successors have a late finish time
tLF equal to the largest early finish tEF time of all activities. The late start times tLS of the
activities are calculated by subtracting the expected duration tE from the late finish time
tLF . The slack time tS is calculated by subtracting the early start time tES from the late
start time tLS of each activity. The calculation of the early start times tES can be viewed
as calculating the longest predecessor path towards this activity, while the calculation of
the late finish time tLF can be viewed as calculating the longest path along the successors
starting from the last activity. An activity with a slack time of zero is considered critical.
Table 3.4: Tabular representation of the results of applying PERT on the data from Table
3.3.
The results are either displayed in tabular form, as shown in Table 3.4, or in a so-
called PERT chart, which can be represented as an Activities-on-the-Node (AoN) or an
Activities-on-the-Arrows (AoA) network. For the AoN representation, nodes are com-
monly illustrated as is shown in Figure 3.10. An AoN representation of the anchored
bored pile wall project example is shown in Figure 3.11.
It becomes visible that the path from B2 towards F is critical, while all other activities have non-zero slack times. PERT is an early approach to managing large projects and, by formalizing the defined metrics as well as the PERT chart, offered new tools to project managers. Furthermore, through the incorporation of uncertainty, it allowed assessing the risk of delays and thereby helped to minimize costs. The calculation of the slack time also helped to identify critical activities. Resource scarcity, however, was not in the focus of the developers in the context of large-scale military projects.
Figure 3.11: Anchored bored pile wall project example solution in PERT notation. The
critical path, consisting of all activities with zero float time (B2 → E12 → A2 → E22 →
F), is highlighted in blue. The corresponding Gantt Chart is shown in Figure 3.9.
3.4 Resource Constrained Project Scheduling Problem
There are different ways of representing the entirety of a resource constrained project
scheduling problem. Due to the different possible cases, different notations are found
throughout literature. When the anchored bored pile wall example from Figure 3.1 is
extended by resources, a possible outcome is shown in Figure 3.12 (Davis, 1973).
Figure 3.12: Bored pile wall project example with precedence and resource constraints.
On the upper right of each of the activities’ boxes there is a vector of the number of
resources of each type that is required for the activity to be executed. For simplicity only
one type of resource, a worker, is considered. For the bored pile wall activities B1 and
B2, three workers are required. Each bored pile is created by drilling into the ground while connecting parts of a metal casing to each other to go deeper. One worker is required for operating the drilling rig, one for operating the crane delivering the casing parts, and one to guide the casing to connect to the previous part. The excavation
activities only require one worker for operating the excavator. The anchoring process
requires two workers for operating and guiding the anchoring machine. The foundation
process requires three workers, adding reinforcement material and concrete.
Another, more intuitive way of representing resources graphically, is the notation
shown in Figure 3.13. In this notation resources are shown as rounded rectangles and
dashed arrows towards these rectangles indicate resource requirements. The numbers on
the arrows indicate the number of instances required. For the anchored bored pile wall
project example, the worker resource is added on the top side of the figure and each ac-
tivity indicates its requirements by a dashed arrow.
Figure 3.13: Bored pile wall project example precedence and resource constraints.
Additionally, the activities’ boxes can be colored to indicate the required resources; geometry and its overlaps can be indicated by a rhombus shape, and a hexagon indicates material demands. A general representation of such a problem is illustrated in Figure 3.14.
In order to determine how hard an RCPSP is prior to solving it, it is useful to analyze it. This Section presents commonly used metrics for characterizing RCPSPs. A number of indicators characterizing the structure of scheduling problems have been published in the literature. Patterson (1976) defined a number of such indicators and analyzed their impact on the performance of (meta-)heuristic algorithms, the most commonly used of which is the resource constrainedness. At the same time, Cooper (1976) defined the
resource factor, as well as the resource strength. The resource constrainedness is defined as the ratio γk of the average of the quantities rik of resource k required by each activity i requiring at least one unit, as defined in Equations 3.6 and 3.7, to the total number of units rk available of that resource, resulting in Equation 3.8 (Patterson, 1976).
\[
\rho_{ik} = \begin{cases} 1 & \text{if } r_{ik} > 0 \\ 0 & \text{otherwise} \end{cases} \qquad \forall k \in R,\ \forall i \in J \tag{3.6}
\]
\[
\phi_k = \frac{\sum_{i \in J} r_{ik}}{\sum_{i \in J} \rho_{ik}} \qquad \forall k \in R \tag{3.7}
\]
\[
\gamma_k = \frac{\phi_k}{r_k} \qquad \forall k \in R \tag{3.8}
\]
The resource factor λ, as defined in Equation 3.9 by Cooper (1976), is the ratio of the average number of resources required for each activity and the total number of resource types |R|.
\[
\lambda = \frac{\sum_{i \in J} \sum_{k \in R} \rho_{ik}}{N \cdot |R|} \tag{3.9}
\]
The ratio of the number of resources available of type k, rk, and the average number required by all activities, Φk, as defined in Equation 3.10, is called the resource strength ξk. It is calculated as shown in Equation 3.11 (Cooper, 1976).
\[
\Phi_k = \frac{\sum_{i \in J} r_{ik}}{N} \qquad \forall k \in R \tag{3.10}
\]
\[
\xi_k = \frac{r_k}{\Phi_k} = \frac{r_k \cdot N}{\sum_{i \in J} r_{ik}} \qquad \forall k \in R \tag{3.11}
\]
For a more detailed overview of problem characteristics the reader is referred to the
papers by Patterson (1976) and Cooper (1976).
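For a single resource type, these indicators can be computed in a few lines. The worker demands below are those of the example in Figure 3.12 with a pool of four workers; the function interface itself is an illustrative assumption.

```python
def resource_indicators(demands, capacity, n_types=1):
    """Resource constrainedness gamma (Eq. 3.8), resource factor lambda
    (Eq. 3.9), and resource strength xi (Eq. 3.11) for one resource type;
    `demands` maps each activity to its required units of that resource."""
    n = len(demands)
    users = [r for r in demands.values() if r > 0]
    phi = sum(users) / len(users)              # Eq. 3.7: average over users
    gamma = phi / capacity                     # Eq. 3.8
    lam = len(users) / (n * n_types)           # Eq. 3.9
    xi = capacity * n / sum(demands.values())  # Eq. 3.11
    return gamma, lam, xi

# worker demands of the example in Figure 3.12, four workers available
demands = {'B1': 3, 'B2': 3, 'E11': 1, 'E12': 1, 'A1': 2, 'A2': 2,
           'E21': 1, 'E22': 1, 'E3': 1, 'E4': 1, 'F': 3}
gamma, lam, xi = resource_indicators(demands, capacity=4)
```

Here every activity uses the worker resource, so λ = 1; the total demand of 19 units over 11 activities against a capacity of 4 gives γ = 19/44 ≈ 0.43 and ξ = 44/19 ≈ 2.32.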
It is now assumed that only 4 workers are available, which for the schedule illustrated in
Figure 3.6 results in the resource profile shown in Figure 3.15. The vertical axis indicates
the number of workers required, while the horizontal axis represents the time. The red line
marks the resource limit of four workers. Such a resource profile is created by stacking all
activities that require the analyzed resource on top of each other. In this case the activities
E11, A1, and E21 are split into two boxes, to prevent gaps in the profile. It is visible that
this resource limit is exceeded by the parallel execution of the bored pile wall activities
B1 and B2 on the very left.
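Checking a schedule against such a resource limit reduces to computing the peak of the stacked profile. Since usage only increases at an activity's start, the peak must occur at one of the start times; the schedule fragment below is an illustrative subset of Figure 3.6.

```python
def peak_usage(schedule, demands):
    """Peak height of the stacked resource profile (as in Figure 3.15).
    `schedule` maps activities to (start, finish); usage is evaluated at
    every start event, where the maximum must occur."""
    usage_at = lambda t: sum(demands[a] for a, (s, f) in schedule.items()
                             if s <= t < f)
    return max(usage_at(s) for s, _ in schedule.values())

# parallel execution of B1 and B2 as in Figure 3.6: 3 + 3 = 6 workers
schedule = {'B1': (0, 4), 'B2': (0, 6)}
demands = {'B1': 3, 'B2': 3}
assert peak_usage(schedule, demands) == 6   # exceeds the limit of 4
```

Shifting B2 to start after B1 finishes, as in Figure 3.16, brings the peak down to 3 and makes the schedule resource-feasible.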
Therefore, in order to not exceed the four worker resource limit, the activities B1 and
B2 cannot be executed in parallel. A schedule preventing that is illustrated in Figure 3.16.
Activity B1 is executed first, followed by activity B2. As illustrated in Figure 3.17, the
resource limit is not exceeded at any time throughout the schedule.
However, if instead of B1 the other bored pile wall B2 were produced first, this would result in the schedule shown in Figure 3.18, which also does not exceed the resource limit, as shown in Figure 3.19. The new schedule is more effective than the previous one, shown in Figure 3.16, as the duration is shorter. The resource profile also shows that the idle times of workers are reduced. This illustrates how choices in the order of activities result in different schedules with different makespans. Finding the order of activities minimizing some objective is the focus of this Section.
Figure 3.16: Bored pile wall project example solution with resource constraints.
Figure 3.18: Bored pile wall project example solution with resource constraints.
Sprecher et al. (1995) described four different types of schedules: feasible, semi-active, active, and non-delay schedules2. In order to illustrate the differences among the schedule types, the anchored bored pile wall project example has been modified. The resource limit was reduced to three workers, the duration of E12 was reduced to four time units, and the anchoring activity A2 requires one additional worker. To be able to understand the differences among the schedule types, two schedule operations have to be introduced: the local and the global left shift.
A local left shift continuously moves an activity to the left, to an earlier time, without creating an infeasible schedule at any point during the shift. In Figure 3.20, shifting activity E3 from time t2 left towards t1 illustrates such a local left shift. E3 can be placed anywhere between the two time points without violating any constraints.
A global left shift can pick up any activity and move it to another position that is
earlier in time. At the final position no constraint may be violated, but it is not required
to only pass feasible solutions during the move. In the example moving activity E4 from
time t4 to time t3 illustrates a global left shift. It is feasible to execute the activity at both
points in time, but it cannot be scheduled in between, as in combination with the activity
B2 it would exceed the resource limit.
Figure 3.20: Illustration of global and local left shift operations. Left shifting E3 to t1 is
a local left shift (Labeled l), while shifting E4 to t3 is a global left shift (Labeled g).
2 This Section is based on the description by Sprecher et al. (1995)
Figure 3.21 shows a feasible schedule for the problem. A feasible schedule is any
schedule satisfying all precedence and resource constraints. As is shown in Figure 3.21
the schedule may contain gaps and it may therefore be possible to locally or globally left
shift activities.
Figure 3.21: Feasible schedule for anchored bored pile wall project example.
A semi-active schedule is a schedule where none of the activities can be locally left shifted. As is illustrated in Figure 3.22, such a schedule contains no gaps that could be closed by local left shifts, but it may still be possible to globally left shift activities; for example, activity E4 can be moved to start right after activity E12. If no activity can be locally or globally left shifted, the schedule is called an active schedule. An active schedule is not necessarily an optimal schedule.
Figure 3.22: Semi-active schedule for anchored bored pile wall project example.
A non-delay schedule is a schedule where each activity is started at the earliest point
where it becomes resource and precedence feasible. Such a schedule is shown in Figure
3.23. Compared to the schedule in Figure 3.22, activity E4 becomes precedence and resource feasible right after activity E12 finishes and therefore has to be scheduled at
that time. A way to verify whether a schedule is a non-delay schedule is to convert the
schedule to a schedule for the corresponding unit-time-duration RCPSP (UTDRCPSP),
where each activity has the exact same duration. For the schedule in Figure 3.23 this is
shown in Figure 3.24. If the original schedule is a non-delay schedule, then the unit-time
transformed schedule is an active schedule (Sprecher et al., 1995).
To show the shortcomings of non-delay schedules, Figure 3.25 shows an active sched-
ule that is much shorter than the non-delay schedule in Figure 3.24. There is a short gap
right after activity E12 is finished. At this point activities E3 and E4 are both precedence
and resource feasible, and in a non-delay schedule they would have to be executed at
that point, creating a suboptimal schedule. It is therefore important not to consider only
non-delay schedules when searching for optimal schedules, as delaying an activity can
3.4 Resource Constrained Project Scheduling Problem 73
Figure 3.23: Non-delay schedule for anchored bored pile wall project example.
Figure 3.24: Unit time duration transformation of the schedule in Figure 3.23
yield an overall shorter schedule. Lemma 4 describes how the different schedule types
are related in terms of set theory.
Figure 3.25: Active schedule for anchored bored pile wall project example.
Lemma 4. Let S denote the set of all schedules, S_F the set of feasible schedules, S_SA the
set of semi-active schedules, S_A the set of active schedules, and S_ND the set of non-delay
schedules. Then the following holds:
S_ND ⊆ S_A ⊆ S_SA ⊆ S_F ⊆ S (Sprecher et al., 1995).
For the purpose of generating schedules, different schedule generation schemes exist.
To be able to explain these different scheduling schemes, priority values were assigned
to each activity in the anchored bored pile wall project example as shown in Figure 3.26.
The number on the right inside each activity's box indicates the respective priority
value. The number on the upper right of each box indicates the number of workers required
for the activity. There is a total of three workers available. The priorities were assigned in
an arbitrary way and form the basis for optimization procedures. Logically it makes sense
to have the priorities respect the precedence relationships, because an activity can never
be executed prior to its predecessors. The scheduling schemes select the lowest priority
values first. This means that a lower priority value implies a higher priority. This choice
will be motivated in Section 4.1.
Figure 3.26: Priorities assigned to each activity of the anchored bored pile wall project
example.
Most commonly the serial schedule generation scheme is used, because it can
represent the entire space of active schedules using priority rules (Kolisch, 1996b).
A priority rule is based on a priority value assigned to each activity. Depending on the
priority rule, an activity is selected to be scheduled next. For simplicity all priority rules in
this thesis select the lowest priority value first. An overview of priority rules is presented
in Section 3.4.8.1.
In the serial schedule generation scheme the priorities define an absolute order in
which the activities are dealt with. The order of the priority values must not violate any
precedence constraints: no activity may have a higher priority value than any of its
successors, so a preceding activity is always selected first. The schedule
generation scheme then schedules the activities in the exact order of the priorities, re-
specting resource constraints. If resources are unavailable at the end time of the preceding
activity, the start is postponed until the activity blocking the resources has finished. The
procedure is listed in Algorithm 1. The scheme is named serial, as only one activity is
considered for scheduling at a time.
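The procedure can be sketched in Python as follows. The dictionary-based activity representation, the single renewable resource, and the function name are illustrative assumptions, not the notation of Algorithm 1; as required above, the priority order is assumed to be precedence feasible.

```python
def serial_sgs(activities, capacity):
    """Serial schedule generation scheme (sketch).

    activities: dict name -> {"duration": int, "demand": int,
                              "preds": [names], "priority": int}
    Activities are scheduled one at a time, in order of increasing
    priority value, each at its earliest precedence- and
    resource-feasible start time.
    """
    order = sorted(activities, key=lambda a: activities[a]["priority"])
    start = {}
    horizon = sum(a["duration"] for a in activities.values())
    usage = [0] * (horizon + 1)  # resource usage per unit time period
    for name in order:
        act = activities[name]
        # earliest precedence-feasible start: after all predecessors finish
        t = max((start[p] + activities[p]["duration"] for p in act["preds"]),
                default=0)
        # postpone until the activity fits into the resource profile
        while any(usage[t + k] + act["demand"] > capacity
                  for k in range(act["duration"])):
            t += 1
        start[name] = t
        for k in range(act["duration"]):
            usage[t + k] += act["demand"]
    return start
```

For a toy instance with three activities and three workers, the scheme delays a resource-blocked activity until capacity is free, exactly as in step 5 of the walkthrough below.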
Lemma 5. All active schedules can be generated by the serial schedule generation scheme
(Kolisch, 1996b).
Figure 3.27 shows a schedule generated for the problem with the priorities shown
in Figure 3.26. The bottom part of the figure shows the corresponding resource profile.
Figure 3.27: Schedule and resource profile created by the serial schedule generation
scheme based on the priorities assigned in Figure 3.26. The critical path is highlighted in
blue.
The schedule is generated by treating the activities in the exact order of the priorities as
follows:
1. The activity B1 has the lowest priority value and no predecessors. All three workers
are available, therefore the activity is scheduled at time t0 .
2. E11 has the lowest priority. It is only precedence feasible after B1 has finished. It
is scheduled right after B1 at t1.
3. A1 requires E11 to be finished. It is scheduled right after E11 at t3 .
4. E21 succeeds A1 and is scheduled right after at t5 .
5. B2 has the lowest priority value of the remaining activities. Neither with B1, nor with
E11 does it exceed the resource limit of three workers. Therefore it can be sched-
uled at time t0 .
6. E12 requires B2 to be finished and exceeds the resource limit in combination with
E11. It does however not exceed the limit together with A1. The earliest feasible
start time is therefore directly after E11 has finished at t3 .
7. A2 requires E12 to be finished and is resource feasible in parallel with A1 and E21. It
can therefore be scheduled directly after E12 at t4.
8. E22 succeeds A2 and does not exceed the resource limit when combined with E21.
It is scheduled directly after A2 at t6 .
9. E3 does not have predecessors. As it has a longer duration than B1 it cannot be
scheduled at time 0. It is also longer than E12 and therefore does not fit in parallel with
that activity either. The earliest slot with one free worker is in parallel with A2 at
t4.
10. E4 can only be scheduled in parallel to E22 at t6 .
11. As F requires E4 to be finished, the only possible time is at the end, at t8.
12. No more activities are left to be scheduled.
In contrast to the serial scheme, the parallel schedule generation scheme, also known
as the Brooks Algorithm (BAG), which was first defined by Brooks & White (1965) and
extended by Whitehouse (1979), generates schedules by time increments. At each discrete
time instance, where a previously scheduled activity ends, a list of unscheduled activities
with satisfied resource and precedence constraints is generated, similar to the constraint-
based simulation approach by König & Beißert (2009), as described in Section 2.6.4. The
next activity to be scheduled at this time is selected from this list by a priority rule. The
scheduling procedure is listed in Algorithm 2.
Lemma 6. All schedules generated by the parallel schedule generation scheme are non-
delay schedules (Kolisch, 1996b).
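The time-incrementing procedure can be sketched as follows; the dictionary-based activity representation, the single renewable resource, and the function name are illustrative assumptions, not the notation of Algorithm 2.

```python
def parallel_sgs(activities, capacity):
    """Parallel schedule generation scheme (sketch).

    Advances through decision points in time; at each point, all
    precedence-feasible unscheduled activities are collected and
    started, in order of increasing priority value, as long as the
    resource limit permits.
    """
    start, finish = {}, {}
    t = 0
    while len(start) < len(activities):
        # activities whose predecessors have all finished by time t
        eligible = [a for a in activities
                    if a not in start
                    and all(p in finish and finish[p] <= t
                            for p in activities[a]["preds"])]
        eligible.sort(key=lambda a: activities[a]["priority"])
        for a in eligible:
            in_progress = sum(activities[b]["demand"] for b in start
                              if start[b] <= t < finish[b])
            if in_progress + activities[a]["demand"] <= capacity:
                start[a] = t
                finish[a] = t + activities[a]["duration"]
        # next decision point: the earliest finish time after t
        future = [f for f in finish.values() if f > t]
        t = min(future) if future else t + 1
    return start
```

Because activities are started greedily as soon as they become feasible, this sketch can only produce non-delay schedules, in line with Lemma 6.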
Figure 3.28 shows a schedule generated for the problem with the priorities shown
in Figure 3.26. The bottom part of the figure shows the corresponding resource profile.
The schedule is generated by discrete time increments and selecting currently feasible
activities by choosing the activity with the lowest priority value:
6. A1 ends at time t5 .
E21 is the only feasible activity and is therefore scheduled.
7. A2 ends at time t6 .
E22 is now feasible and scheduled.
Figure 3.28: Schedule and resource profile created by the parallel schedule generation
scheme based on the priorities assigned in Figure 3.26. The critical path is highlighted in
blue.
When comparing the results of both schedule generation schemes in Figures 3.27 and
3.28, it can be observed that the main difference lies in the gaps within the resource
profile of the serial schedule between times t0 and t1 , and t2 and t3 . Those gaps cannot be
created in the parallel scheme as activities E3 and E4 are used to fill the gaps. This also
gives rise to the naming of the parallel scheme, as it maximizes parallel execution in a
greedy way. Therefore the parallel schedule generation scheme can only create non-delay
schedules, as is stated in Lemma 6. It can also be observed that the two schedules have
different critical paths, each highlighted in blue, and therefore have different makespans.
Generally the serial schedule generation scheme has higher computational demands
because it needs to search for the earliest precedence- and resource-feasible start time for
every activity, while the parallel scheme performs time increments. However, the run time
complexity for both schemes is O(N 2 · |R|) (Kolisch & Hartmann, 1999; Pinson et al.,
1994), since in the worst case each activity has to be started at an individual time instance.
In practical cases, the runtime of the serial scheme is much higher, as is illustrated in Fig-
ure 3.29. It shows the simulation run time of both scheduling schemes for 1000 randomly
generated problems consisting of up to 1000 activities.
Figure 3.29: Comparison of performance of the serial and parallel schedule generation
schemes with random problems with up to 1000 activities (x axis: number of activities,
0 to 1000; y axis: simulation run time in ms, 0 to 400; one curve per scheme).
The author of this thesis has developed the new pseudo-serial schedule generation
scheme, which is presented in Section 4.1. It combines the advantages of both the serial
and the parallel schedule generation schemes.
In the job shop scheduling problem, each machine can operate on only one job at a time,
and each job can only be operated on by one machine at a time. The objective is to find
job sequences that minimize the makespan of all operations (Błażewicz et al., 1996).
Like the resource constrained project scheduling problem,
the job shop scheduling problem is NP-hard (Lenstra & Rinnooy Kan, 1979). The main
difference between the problems is that the job shop scheduling problem involves a set of
stationary machines where jobs pass through, whereas the RCPSP involves one large job
consisting of many activities (operations) that need to be performed involving resources
(machines). Furthermore, in the RCPSP, multiple resources can be required for a single
activity. Kiran & Smith (1984) presented an overview of simulation based methods for job
shop scheduling. For RCPSPs the simulation-based approaches merely use the simulation
model to generate schedules based on predetermined parameters, such as priority values,
that are set by a heuristic algorithm, such as those described in Section 3.4.8.
The first simulation model that was widely applied to construction operations was
CYCLONE, which is based on the modeling approach presented in Section 2.5.4 (Halpin,
1977). Later STROBOSCOPE was developed, based on a similar modeling approach,
which is presented in Section 2.5.5 (Martinez & Ioannou, 1994). The ABC
approach of Shi (1999) was presented in Section 2.5.6. All those approaches focus on
cyclic operations, such as earthworks or the bored pile drilling example. Using them as a
model for an entire construction project causes the model to become highly complex.
Simulation-based methods for RCPSPs only came into the focus of research once sufficient
computational power for simulating large construction projects became available. König
et al. (2007) introduced the constraint-based simulation approach in order to create sched-
ules for civil engineering projects and ship construction. The constraint-based simulation
approach has strong similarity to the parallel schedule generation scheme presented in
Section 3.4.4.3, but features some key differences. The main difference is that the next
activity to be scheduled is selected randomly. Through multiple simulation runs in a
Monte Carlo way (Section 2.2.6), well-performing schedules are generated. Additionally,
incorporating soft-constraints allows the inclusion of experience from similar projects,
human strain factors, and preferences of the planners (Beißert et al., 2010). As these constraints
do not strictly need to be satisfied, they are modeled as soft-constraints. The simulation
concept is illustrated in Figure 3.30.
Additionally, König’s simulation concept involves the productivity of workers and ma-
chines under different circumstances, which may influence the durations of activities; for
instance, having to perform an activity in a confined space may reduce efficiency. Further-
more the concept was extended by incorporating uncertainty in order to create schedules
which maximize robustness (König, 2011a,b). Moreover, the feedback of actual progress
data from construction sites was enabled by Szczesny & König (2015). It involves updat-
ing a given schedule based on real-time data and, if considered necessary, creating a new
schedule using evolutionary algorithms in combination with König’s simulation approach.
Updating may be required due to extensive delays of the involved activities, which is dis-
cussed in more detail in Section 4.6.
Figure 3.31: Schedule from Figure 3.27 with sequence enforcement constraints added by
the Dori-Algorithm depicted as thick red arrows.
Figure 3.32: Schedule resulting from backward simulation run according to the Dori-
Algorithm based on the sequence enforcement constraints in Figure 3.31.
Figure 3.33: Float times and critical path resulting from Dori-Algorithm for the schedule
in Figure 3.31.
Since the resource constrained project scheduling problem is NP-hard in the strong sense
(Blazewicz et al., 1983), deterministic approaches are hardly feasible for large real-life
problems. However, for completeness, the theoretical approaches for deterministic RCPSP
solutions are described within this Section³. Talbot & Patterson (1978) defined a con-
ceptual linear program in order to solve the project scheduling problem without taking
resource constraints into account. This linear program is shown in Equation 3.12. This
definition forms a basis for the definitions of the RCPSP and is therefore described at
this point. It introduces two artificial activities called dummy activities 0 and J + 1.
Activity 0 is the start activity and precedes all other activities, while activity J + 1 is the
final activity, which succeeds all other activities. Both dummy activities have a duration of
0. The objective of the linear program is to minimize the starting time F_{J+1} of the finish
activity, as shown in Equation 3.12a. Equation 3.12b defines all precedence constraints
V_ij: the finish time F_i of activity i must be smaller than or equal to the finish
time F_j of activity j minus the duration d_j of activity j, given that there is a precedence
constraint defining activity j to be a successor of activity i. Equation 3.12c defines the
start activity to finish at time 0, while Equation 3.12d defines the finish times to be larger
than or equal to 0 for the set of all activities J. Equation 3.12e defines that all finish times
must be smaller than or equal to some predetermined project horizon, for instance the sum
of all activities’ durations.
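Based on the description above, Equation 3.12 can be reconstructed as follows; the horizon symbol H is an assumption, while the remaining notation mirrors Equations 3.15 and 3.17:

```latex
\begin{align}
  \text{minimize}\quad   & F_{J+1}           &&                                          \tag{3.12a}\\
  \text{subject to}\quad & F_i \le F_j - d_j && \forall \langle i,j\rangle \in V_{ij}    \tag{3.12b}\\
                         & F_0 = 0           &&                                          \tag{3.12c}\\
                         & F_j \ge 0         && \forall j \in J                          \tag{3.12d}\\
                         & F_j \le H         && \forall j \in J                          \tag{3.12e}
\end{align}
```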
Pritsker et al. (1969) however defined a different approach to solve PSPs using linear
programs involving time indexing. They introduced a binary variable x_it, as
defined in Equation 3.13, that is 1 if and only if activity i finishes at time t. E_i is the set
of possible finish times of activity i. This set can be determined for each activity using
the critical path method as described in Section 3.3.3. The sum defined in Equation 3.14
therefore results in the time F_i at which activity i finishes, as only one x_it can be 1 for
each activity i, as defined by the constraint in Equation 3.15b. The objective
function is equal to the finishing time of the final dummy activity as defined in Equation
3.15a. Equation 3.15c defines the precedence constraints equivalently to Equation 3.12b after
substitution according to Equation 3.14. Equation 3.15d defines that the initial dummy
activity finishes at time 0 and Equation 3.15e defines that all x_it are binary variables.
³ This Section was inspired by the description by Bauer (2009).
x_{it} = \begin{cases} 1 & \text{if } t = F_i \\ 0 & \text{otherwise} \end{cases} \qquad (3.13)
F_i = \sum_{t \in E_i} t \cdot x_{it} \qquad (3.14)
\text{minimize} \quad \sum_{t \in E_{J+1}} t \cdot x_{J+1,t} \qquad (3.15a)
\text{subject to} \quad \sum_{t \in E_i} x_{it} = 1 \quad \forall i \in J \qquad (3.15b)
\sum_{t \in E_j} t \cdot x_{jt} - \sum_{t \in E_i} t \cdot x_{it} \ge d_j \quad \forall \langle i, j \rangle \in V_{ij} \qquad (3.15c)
x_{00} = 1 \qquad (3.15d)
x_{it} \in \{0, 1\} \quad \forall i \in J;\ \forall t \in E_i \qquad (3.15e)
This time indexed notation allows the definition of an additional constraint modeling
resource limitations, as defined in Equation 3.17d. R is the set of all resources and T
is the set of all time periods. k_r is the limit of resource r and k_ir is the consumption of
resource r by activity i per time period. Equation 3.16 defines k_ir as the overall demand
u_ir of resource r by activity i divided by the duration d_i of activity i. Q_it is the set of time
periods during which activity i is in progress if it is started at time t. Therefore the constraint
in Equation 3.17d prevents any resource limit from being exceeded at any time.
k_{ir} = \frac{u_{ir}}{d_i} \qquad (3.16)
\text{minimize} \quad \sum_{t \in E_{J+1}} t \cdot x_{J+1,t} \qquad (3.17a)
\text{subject to} \quad \sum_{t \in E_i} x_{it} = 1 \quad \forall i \in J \qquad (3.17b)
\sum_{t \in E_j} t \cdot x_{jt} - \sum_{t \in E_i} t \cdot x_{it} \ge d_j \quad \forall \langle i, j \rangle \in V_{ij} \qquad (3.17c)
\sum_{i \in J} k_{ir} \sum_{\tau \in Q_{it}} x_{i\tau} \le k_r \quad \forall r \in R;\ \forall t \in T \qquad (3.17d)
x_{00} = 1 \qquad (3.17e)
x_{it} \in \{0, 1\} \quad \forall i \in J;\ \forall t \in E_i \qquad (3.17f)
When examining the linear program defined by Equation 3.17, several downsides be-
come clear. For activities with very long durations the set T can become very large. This
can be counteracted by setting the length of a time period to the greatest common divisor
of all activities’ durations. However, two activities with durations 1 and 10000, for
instance, would still require 10000 time periods to be defined. Each time period in T cre-
ates one constraint for each resource in R, and each activity in J creates one variable x_it
for each possible start time in E_i. This causes the linear program for larger problems to
become extremely large. This illustrates why heuristic procedures are of high interest for
research.
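The growth described above is easy to quantify. The following sketch counts time periods and resource constraints after rescaling all durations by their greatest common divisor; the function name and the naive serial horizon are illustrative assumptions:

```python
from functools import reduce
from math import gcd

def time_indexed_size(durations, num_resources):
    """Rough size of the time-indexed formulation (Equation 3.17).

    Rescales all durations by their greatest common divisor, takes the
    sum of the rescaled durations as a naive project horizon, and
    returns the number of time periods together with the number of
    resource constraints (one per resource and period, Equation 3.17d).
    """
    g = reduce(gcd, durations)
    periods = sum(d // g for d in durations)
    return periods, periods * num_resources
```

For durations 500 and 1000 the rescaling reduces the horizon to three periods, whereas durations 1 and 10000 still yield over ten thousand periods, illustrating why the rescaling does not always help.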
As shown in the previous Section, deterministic solutions for large scale problems are
hardly feasible; heuristic procedures are therefore highly interesting for practical
problems. This Section provides an overview of the state-of-the-art of heuristics for
resource-constrained project scheduling problems⁴.
A priority rule assigns a value v(i) to each activity i and states whether the minimum
or maximum value is to be selected first (Kolisch & Hartmann, 1999). Additionally a
way to handle ties needs to be defined. The priority rules can be used by either of the
schedule generation schemes defined in Section 3.4.4 but could also be incorporated into
the constraint-based discrete-event simulation approach by Beißert et al. (2007), which
is described in Section 2.6.4. Özdamar & Ulusoy (1995) define the difference between
the scheduling schemes as the employment of either fixed a-priori priorities (serial) or
dynamically updated priorities (parallel). In this thesis priorities are always assumed to
be set a-priori, but for the serial schedule generation scheme they need to be in precedence
feasible order.
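A minimal sketch of applying such a rule, assuming the values v(i) are supplied by a function and ties are broken by the smallest activity number; the tie-breaking convention and the function name are assumptions for illustration:

```python
def select_next(eligible, v, minimize=True):
    """Apply a priority rule: select the activity from the eligible
    set with the lowest (or highest) value v(i).  Ties are broken by
    the smallest activity number, which is one common convention."""
    if minimize:
        return min(eligible, key=lambda i: (v(i), i))
    # for maximization, negate the activity number so that ties still
    # resolve to the smallest activity number
    return max(eligible, key=lambda i: (v(i), -i))
```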
The priorities can either be assigned randomly and used as a basis for optimization
procedures, or be calculated based on different criteria. The individual rules can be clas-
sified based on the information used to calculate the priority values v(i), which can be
network, time, and resources, as well as lower and upper bounds (Kolisch & Hartmann,
1999). For an overview of many different priority rules the reader is referred to the papers
by Davis & Patterson (1975), Kolisch (1996a), and Hartmann & Kolisch (2000).
X-Pass methods deploy different priority rules in combination with different schedule
generation schemes on the problem. The simplest are single pass methods, which
employ exactly one priority rule in combination with one schedule generation scheme.
Multi-pass methods can be created by different combinations of schedule generation
schemes and priority rules.
⁴ This Section was in part inspired by the review papers by Hartmann & Kolisch (2000) and Kolisch &
Hartmann (1999, 2006).
Glover (1989, 1990a,b) introduced the method of tabu search, which can be applied to
resource constrained project scheduling problems. It is essentially a steepest-descent neigh-
borhood search. Starting from an initial solution, neighboring solutions are analyzed for
the best solution. As the method does not strictly require improving solutions, it is possible that
the algorithm starts to cycle. To prevent this, Glover introduced a kind of memory, re-
membering previously encountered solutions to prevent going back to a recently visited
solution. This memory can be depleting, such that only a certain number of previously
visited solutions is stored, in order to save memory and computational effort. An appli-
cation of tabu search to resource constrained project scheduling was published by Baar
et al. (1999).
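The idea can be sketched generically; this is not the specific formulation of Baar et al., the neighborhood and cost functions are placeholders supplied by the caller, and the bounded deque realizes the depleting memory:

```python
from collections import deque

def tabu_search(initial, neighbors, cost, iterations=100, tenure=10):
    """Generic tabu search sketch: move to the best non-tabu neighbor
    each iteration, even if it is worse, while a bounded memory of
    recently visited solutions prevents cycling."""
    current = initial
    best = initial
    tabu = deque([initial], maxlen=tenure)  # depleting memory
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # steepest-descent step
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best
```

In a scheduling context the solutions would be priority vectors and the cost the makespan of the generated schedule; the integer example in the accompanying check is merely the simplest search space on which the mechanics can be observed.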
Kirkpatrick et al. (1983) introduced the simulated annealing algorithm, combining
combinatorial optimization and statistical mechanics into a new metaheuristic optimiza-
tion algorithm. It is based on the annealing of metals, where free floating
particles cool down and lock into a state of low energy. After an initial solution is gen-
erated, the neighborhood is analyzed for other solutions. If a better solution is found, it is
always accepted as the new current optimum. If instead a worse solution is found, it is
only accepted with a probability depending on the quality of the solution and the
current temperature of the simulated cooling matter. The temperature T_t is lowered during
the execution of the algorithm, according to a cooling scheme, until the matter is cooled down. The
probability P(y|x) of accepting a new solution y, given the current solution x, is defined
in Equation 3.18, where f(x) is the fitness of solution x.
P(y \mid x) = \begin{cases} 1 & \text{if } f(y) \le f(x) \\ \exp\left(-\frac{f(y) - f(x)}{T_t}\right) & \text{if } f(y) > f(x) \end{cases} \qquad (3.18)
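Equation 3.18 translates directly into code. The following sketch adds a geometric cooling scheme, which is one common choice and an assumption here, as are all function names:

```python
import math
import random

def accept(f_x, f_y, temperature, rng=random.random):
    """Acceptance criterion of Equation 3.18: an improving solution y
    is always accepted; a worse one only with probability
    exp(-(f(y) - f(x)) / T_t)."""
    if f_y <= f_x:
        return True
    return rng() < math.exp(-(f_y - f_x) / temperature)

def simulated_annealing(x, neighbor, f, t0=10.0, cooling=0.95,
                        steps=200, seed=0):
    """Minimal annealing loop with geometric cooling t <- cooling * t."""
    rng = random.Random(seed)
    best, t = x, t0
    for _ in range(steps):
        y = neighbor(x, rng)
        if accept(f(x), f(y), t, rng.random):
            x = y
        if f(x) < f(best):
            best = x
        t *= cooling  # cooling scheme: lower the temperature each step
    return best
```

At high temperatures almost any worsening move is accepted, which allows the search to escape local optima; as T_t approaches zero the behavior degenerates into pure descent.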
Holland (1975) introduced a new metaheuristic algorithm inspired by adaptation and
evolution in biology. While the simulated annealing algorithm
works on improving a single solution, a genetic algorithm works on a population of solu-
tions, which are improved using a set of operators that either modify (mutate) a member
of the population or mate (crossover) two members of the population. The population is
usually kept at a certain number of members. When that number is exceeded through
modification, the weakest members of the population die out. Weakness in this context
is commonly determined by a fitness function. In the scheduling domain, the weakest
solutions would be those having the longest makespan. Genetic algorithms were applied
to resource constrained project scheduling problems by Hartmann (1998), Toklu (2002),
Valls & Ballestín (2002), Senouci & Eldin (2004), Debels & Vanhoucke (2005), Debels
et al. (2006), Valls et al. (2008), Long & Ohsato (2009), König (2011a), and Ghoddousi
et al. (2013).
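A minimal steady-state sketch of these operators on priority permutations; the order-preserving crossover and the swap mutation shown here are illustrative choices, not those of any specific paper cited above:

```python
import random

def evolve(population, fitness, rng, generations=50):
    """Steady-state genetic algorithm sketch on priority permutations.

    Each generation mates two parents by an order-preserving one-point
    crossover, mutates the child by swapping two positions, and lets
    the weakest member of the enlarged population die out, keeping the
    population size constant."""
    size = len(population)
    for _ in range(generations):
        p1, p2 = rng.sample(population, 2)
        cut = rng.randrange(1, len(p1))
        head = p1[:cut]
        # crossover: keep p1's head, append p2's remaining values in order
        child = head + [v for v in p2 if v not in head]
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]  # mutation: swap two priorities
        # survival of the fittest: the lowest fitness value is the best
        population = sorted(population + [child], key=fitness)[:size]
    return population
```

Because both operators preserve the permutation property, every member of the population remains a valid priority assignment for a schedule generation scheme.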
Dorigo et al. (1996) introduced the ant colony optimization algorithm as a new meta-
heuristic. The idea of the algorithm is to simulate a population of agents driven by a
greedy force. Through the interaction of the agents by so-called pheromones, the per-
formance of the algorithm was improved significantly over the performance of a single
agent. The algorithm was adapted to resource-constrained project scheduling problems
by Merkle et al. (2002). They used the serial schedule generation scheme, with pheromones
reflecting the expected efficiency of putting certain activities in specific places, in combination
with the latest start time priority rule, where activities are ordered by their latest start time
according to the critical path method described in Section 3.3.3.
Local search is a procedure that starts from random points in the search space and tries to
find improved solutions by local modifications of the solutions. In the case of scheduling
this is usually done by modifying the priority values. These methods include procedures
such as the previously introduced simulated annealing and tabu search methods. Simu-
lated annealing is a stochastic local search procedure as defined by Hoos & Stützle (2004).
Fleszar & Hindi (2004) use a variable neighborhood search applied to RCPSPs
based on the serial schedule generation scheme. Valls et al. (2003) use a topological or-
der representation of the schedules encountered, combining local search with tabu search,
while also using the serial schedule generation scheme.
Another possibility for local search algorithms is the use of swaps between two precedence-
feasible activities, modifying an existing schedule (Bügler & Borrmann, 2014; Bügler
et al., 2013; Dori, 2016). A swap is commonly performed by interchanging
the priorities of two activities. A special kind of swap, a cyclical shift, was used in the
simulated annealing algorithm by Bouleimen & Lecocq (2003). The effect of a normal
swap differs depending on the schedule generation scheme that is used. The schedule
generation schemes were introduced in Section 3.4.4. In order to illustrate the effects of
swapping activities, a simplified version of the bored pile wall project example is intro-
duced in Figure 3.34. It features two bored pile walls B1 and B2, both requiring a drilling
rig, which is colored in yellow. Furthermore two excavation activities E1 and E2 need to
be scheduled, requiring an excavator, which is colored in green. Finally a foundation can
be built, which is not assigned any resources in this example. There is only one drilling
rig and one excavator available, hence neither B1 and B2, nor E1 and E2 can be executed
in parallel.
Figure 3.34: Simplified bored pile wall project example with priorities on the right inside
of each activity’s box.
Figure 3.35 shows an example schedule for the simplified bored pile wall project
example in Figure 3.34. The schedule could have been created by either of the two schedule
generation schemes based on the priorities defined a-priori.
Figure 3.35: Example schedule for simplified bored pile wall project example in Figure
3.34.
Figure 3.36 shows the schedule after E1 and E2 have been swapped. This swap, how-
ever, only has an effect when using the serial schedule generation scheme. In the parallel
scheme, when activity B1 ends, E1 immediately becomes resource and precedence feasible
and is therefore scheduled at this very time instance. In the serial scheme however, E2 is
scheduled at the earliest resource and precedence feasible time, right after B2 has finished,
leaving insufficient room for E1 and therefore pushing it to start after E2. This creates a
longer schedule than the initial one.
Figure 3.36: Example schedule from Figure 3.35 after swapping E1 and E2 using the
serial schedule generation scheme.
When an additional swap is performed, switching the priorities of B1 and B2, the
schedule shown in Figure 3.37 results. This schedule would, in this priority config-
uration, be identical for both scheduling schemes. The resulting schedule is shorter than the
initial one and is actually an optimal solution for the problem.
Figure 3.37: Schedule from Figure 3.35 after additionally swapping B1 and B2.
When B1 and B2 are swapped first, however, the serial scheduling scheme creates a longer
schedule, similar to the one in Figure 3.36. This is shown in Figure 3.38.
When on the other hand the parallel scheme is used for the exact same swap, the
schedule collapses into the optimal solution right away, as is shown in Figure 3.39. This
is due to the property of the parallel schedule generation scheme that it only creates
Figure 3.38: Example schedule from Figure 3.35 after swapping B1 and B2 using the
serial schedule generation scheme.
non-delay schedules and therefore minimizes idle times in a greedy manner. This, however,
is not always optimal, as is shown in the following example.
Figure 3.39: Example schedule from Figure 3.35 after swapping B1 and B2 using the
parallel schedule generation scheme.
Consider another modification of the example, where a sheet pile wall is created
instead of the second bored pile wall, so that a pile driver is required for the activity
P1. The modified problem, with activity priorities, is shown in Figure 3.40.
A schedule, resulting from the priorities in Figure 3.40, is shown in Figure 3.41. This
schedule would result from both scheduling schemes, given the assigned priorities.
However, when E1 and E2 are swapped, this does not have any effect on the result
of the parallel schedule generation scheme, as E2 is the only precedence and resource
feasible activity at t1, when P1 is finished. In the serial schedule generation scheme, on
the other hand, E1 will be scheduled at time t2, right after B1 has finished, resulting
in a much shorter schedule, as shown in Figure 3.42. Due to the unnecessary
idle time of the excavator between times t1 and t2, this is not a non-delay schedule and
therefore cannot be generated by the parallel schedule generation scheme.
Figure 3.40: Simplified bored pile wall and sheet pile wall project example with priorities
on the right inside of each activity’s box.
Figure 3.42: Schedule for the example in Figure 3.40 after swapping E1 and E2, when
using the serial schedule generation scheme.
Lemma 7. Every active schedule can be reached from any other schedule by applying a
series of swaps using the serial schedule generation scheme.
Lemma 8. Every non-delay schedule can be reached from any other schedule by applying
a series of swaps using the parallel schedule generation scheme.
Dori (2016) defined the notion of reasonable swaps, as swaps between activities which
share the usage of a common resource, and cause an actual change in the resulting sched-
ule. Whether a swap results in a different schedule depends on the scheduling scheme
that is used. As the work of Dori (2016) builds on the constraint-based simulation by
König et al. (2007), the definition of reasonable swaps is based on the parallel schedule
generation scheme as defined in Section 3.4.4.3.
The parallel schedule generation scheme allows the collection of such swaps during
schedule generation with minimal effort. During the schedule generation process, the
eligible set E(g) is updated every time an activity is scheduled. If activities previously
in the eligible set are removed as a result of the scheduled activity, this must be
caused by resources shared between the scheduled activity and the removed ones, as prece-
dence feasibility cannot have changed at this point. Therefore any swap between the
scheduled activity and any subsequently removed activity is reasonable according to the
definition by Dori (2016).
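This collection mechanism can be sketched as follows; the trace format, recording the eligible set before and after each scheduling decision of the parallel scheme, is an assumption for illustration:

```python
def reasonable_swaps(schedule_trace):
    """Collect reasonable swaps in the sense of Dori (2016) (sketch).

    Each trace entry is (scheduled_activity, eligible_before,
    eligible_after).  Activities that drop out of the eligible set
    when another activity is scheduled must share a resource with it,
    since precedence feasibility cannot change at that point, so the
    pair forms a reasonable swap candidate."""
    swaps = set()
    for scheduled, before, after in schedule_trace:
        for removed in set(before) - set(after) - {scheduled}:
            swaps.add(frozenset((scheduled, removed)))
    return swaps
```

For the simplified example above, scheduling B1 would remove B2 from the eligible set (both need the drilling rig), marking the pair B1/B2 as a reasonable swap.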
The notion of reasonable swaps can be used as a basis for several metaheuristic algo-
rithms. Dori (2016) calculated the total possible number of swaps Ψ as defined in Equa-
tion 3.19 based on the number of activities N . This results in a total possible number of
resulting schedules |S| as defined in Equation 3.20.
\Psi = \frac{N^2 - N}{2} \qquad (3.19)
|S| = \Psi! \cdot \sum_{i=0}^{\Psi} \frac{1}{i!} \qquad (3.20)
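Equations 3.19 and 3.20 can be evaluated exactly in integer arithmetic, since Ψ! · Σ 1/i! equals Σ Ψ!/i!; the function names are illustrative:

```python
from math import factorial

def num_swaps(n):
    """Total number of possible swaps among n activities (Eq. 3.19):
    one per unordered pair of activities."""
    return (n * n - n) // 2

def num_schedules(n):
    """Total possible number of resulting schedules (Eq. 3.20),
    computed exactly as the integer sum of Psi!/i! for i = 0..Psi."""
    psi = num_swaps(n)
    return sum(factorial(psi) // factorial(i) for i in range(psi + 1))
```

For the five activities of Figure 3.43 this gives Ψ = 10 and |S| = 9 864 101, which illustrates how quickly the swap tree grows even for tiny problems.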
Figure 3.43: Tree created from swaps for a problem with 5 activities. Each node contains
a vector of priorities. One pair of priorities is swapped along each arc.
In summary, Dori (2016) has shown that swap-based local search algorithms are an
effective heuristic search procedure for RCPSPs. A downside is the enormous size of
the resulting search trees, which makes searching within the tree very tedious. This issue was
addressed by the author of this thesis when developing the iterative swap-based limited
depth search algorithm, which is presented in Section 4.3.
This Section provides a tabular overview of the different published heuristic algorithms.
The algorithms are classified into the previously described categories and sorted by publi-
cation date. Additionally the table includes many review papers that provide comparisons
between the different published algorithms.
Table 3.5: List of heuristic optimization procedures for the resource constrained project
scheduling problem.
3.4.9 Benchmarks
3.5 Summary
Construction schedule optimization forms a vital part of the work presented in this the-
sis because proper reaction to changing circumstances requires optimization procedures
to analyze the space of feasible schedules for good solutions. This Chapter described the
state-of-the-art in construction schedule optimization in detail. First, the possible objec-
tives and schedule metrics were introduced. Additionally, the introduction of uncertainty
and robustness into the problems was discussed. As the objectives, including robustness,
are often antithetic or create trade-off problems, the concept of multi-criteria optimization
was introduced. Through the project scheduling problem, which does not take resources
into account, the resource constrained project scheduling problem was introduced. Based
on an illustrative example, the different schedule generation schemes were discussed.
Problem characteristics were compared and simulation-based approaches, such as the
constraint-based discrete-event simulation method and the Dori algorithm were intro-
duced. A detailed overview of heuristic approaches from the literature was given and the
commonly used benchmarks were introduced. This Chapter illustrated that there is a very
large body of methods for solving RCPSPs in the literature, but also pointed out some
research gaps that are addressed in the following Chapter of this thesis.
"In preparing for battle, I have always found that plans are useless but plan-
ning is indispensable." - Dwight D. Eisenhower
The main contributions presented in this Chapter consist of a new schedule generation
scheme for resource constrained project scheduling problems, the adaptation of the water
wave optimization algorithm by Zheng (2015) to this problem, and a swap-based iterative
limited depth search algorithm. The new algorithms offer planners new tools to generate
near-optimal schedules. Additionally the methods can be used to automate the process
of updating schedules based on newly obtained data in a reactive manner. Furthermore
the author developed an uncertainty extension of the project scheduling problem library
PSPLIB. This extension allows scheduling algorithms to be compared on problems
involving uncertain performance factors or activity durations. Such problems are common
when modeling real-life construction sites and uncertainty is often generated as a product
of data acquisition when maintaining a reactive simulation model. This Chapter discusses
each of these contributions in the respective Sections.
by Chan et al. (1996). This however populates the search space with numerous duplicate
solutions for different input vectors and reduces the continuity of the search space.
The parallel schedule generation scheme does not suffer from this discrepancy, but on
the other hand it cannot generate all active schedules, only non-delay schedules. Those
schedules are formed in time increments, selecting feasible activities by their assigned
priority values. Each activity is then started at the earliest possible time. Because the
activities to be executed next are selected only from the feasible activities, as dictated
by both the precedence and the resource constraints, the priorities do not have to reflect
the precedence relationships. Kolisch (1996b) has shown that less than 60% of the
problems in the PSPLIB j30 problem set have optimal solutions that are non-delay
schedules, which can be generated by the parallel schedule generation scheme as defined
in Section 3.4.4.3.
When using schedule generation schemes as a basis for heuristic algorithms, it is de-
sirable to have a parameter space that is continuous and contains the optimal solution.
This allows the algorithms to operate by applying arbitrary priority values to activities.
Neither the serial nor the parallel schedule generation scheme fulfills both criteria. For
this purpose a new schedule generation scheme was developed: the pseudo-serial schedule
generation scheme, which yields a complete and continuous representation of all active
schedules.
As the name suggests, the set of non-delay schedules does not include schedules in
which an activity is executed later than the earliest point in time at which it becomes
resource- and precedence-feasible. Hence, no activity can be delayed beyond this point.
The pseudo-serial schedule generation scheme is based on the parallel schedule genera-
tion scheme, extended by the concept of introducing delays. Delays are introduced into
the space of possibilities by first scaling the search space down to a range of priorities in
the interval [0, 1]. This does not have any effect on the resulting schedules as any given set
of priority values can be scaled to fit into that interval without changing the relative order.
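A sketch of this rescaling (the helper name is the author's own, not from the thesis): mapping priorities into [0, 1) keeps their relative order while reserving the integer part for delay counts.

```python
def normalize(priorities):
    """Rescale priorities into [0, 1) without changing their relative order,
    so the integer part of each value stays free to encode delays; the small
    inflation of the span keeps the maximum strictly below 1."""
    lo, hi = min(priorities), max(priorities)
    span = (hi - lo) or 1.0  # guard against all-equal priorities
    return [(p - lo) / (span * 1.0000001) for p in priorities]
```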
Delays are then represented as integer values added to those priorities. When selecting
the activity with the lowest priority value Pj, only its fractional part Rj is compared. An
activity that has been delayed once is ignored the first time it comes up as the next activity
to be scheduled. In this representation such an activity is given a priority value in the
range [1, 2]. After it has been ignored once, the priority value is reduced to the range
[0, 1] by subtracting 1. The next time the same activity is selected for scheduling, it is no
longer ignored.
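The delay mechanism can be sketched as follows. This is a simplified single-resource illustration by analogy with the description above, not Algorithm 3 of the thesis; the function and parameter names are hypothetical.

```python
from math import floor

def pseudo_serial_sgs(durations, preds, demand, capacity, priority):
    """Simplified pseudo-serial scheme: the fractional part of a priority
    value orders eligible activities, the integer part counts remaining
    delays (single renewable resource; a sketch, not Algorithm 3)."""
    n = len(durations)
    prio = list(priority)                      # mutable copy of priorities
    start, finish = [None] * n, [None] * n
    t = 0
    while any(s is None for s in start):
        # activities whose predecessors have all finished by time t
        eligible = [j for j in range(n) if start[j] is None
                    and all(finish[p] is not None and finish[p] <= t
                            for p in preds[j])]
        used = sum(demand[j] for j in range(n)
                   if start[j] is not None and start[j] <= t < finish[j])
        # schedule by ascending fractional part of the priority value
        for j in sorted(eligible, key=lambda a: prio[a] - floor(prio[a])):
            if used + demand[j] <= capacity:
                if prio[j] >= 1.0:             # delayed: skip once this time
                    prio[j] -= 1.0
                    continue
                start[j], finish[j] = t, t + durations[j]
                used += demand[j]
        # advance to the next finish time of a running activity
        running = [f for f in finish if f is not None and f > t]
        t = min(running) if running else t + 1
    return start
```

Calling the sketch with priorities [0.1, 1.2, 0.3] delays the second activity once, so the third activity is scheduled ahead of it, turning a non-delay schedule into a delayed one.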
The entire procedure is summarized in Algorithm 3. Figure 4.1 illustrates the resulting
search space for a problem with two activities. The bottom left rectangle, where both
values V(A) and V(B) are in the [0, 1] interval, contains all non-delay schedules that can
be generated by the parallel schedule generation scheme. The areas extending outwards in
the top and left directions contain schedules where at least one activity has been delayed.
The search space is generally restricted to positive values.
Figure 4.2 shows an example problem that was introduced in Section 3.4.8.8, involving
five activities and three resources. As discussed in that Section, the only schedule that
can be generated by the parallel schedule generation scheme for this problem is the one
illustrated in Figure 4.3. Activity E2 can be executed as soon as activity P1 has finished. As
Figure 4.1: Search space generated by the pseudo-serial schedule generation scheme for a
two-activity problem. Each axis corresponds to the priority of one activity. The fractional
parts are priority values, the integer parts are delays.
Figure 4.2: The example problem from Section 3.4.8.8. Five activities using three differ-
ent resource types.
there is no other activity that can be started at that time, using the same resource, activity
E2 must be scheduled at that point by the parallel schedule generation scheme. However
the schedule illustrated in Figure 4.4 is the true optimal solution for the problem. In this
schedule activity E2 is not scheduled at the earliest possible point t1 , but later. It is delayed
beyond E1. This schedule can be generated by the serial schedule generation scheme by
giving E1 a lower priority value than E2. It can however also be generated by the pseudo-
serial schedule generation scheme by delaying E2 once, given that the priority value of E1 is lower.
This implies that 2 > V (E2) > V (E1) + 1.
If the priority value of E2 were lower than that of E1, the delay would yield
Figure 4.3: The only schedule that can be generated by the parallel schedule generation
scheme for the example in Figure 4.2.
Figure 4.4: The optimal solution for the problem in Figure 4.2. Compared to the solution
in Figure 4.3 this is not a non-delay schedule, but requires activity E2 to be delayed once,
which is illustrated by a red double arrow.
the schedule shown in Figure 4.5. This schedule could not be generated by either the
serial or the parallel schedule generation scheme, as it is a delayed schedule in which E2
is not executed at its earliest resource- and precedence-feasible time, given that all activities
with lower priority values were already scheduled. If E2 were delayed a second time, this
would again yield the schedule in Figure 4.4. If however E1 had a lower priority value
than E2 and E2 were delayed twice, this would result in the schedule illustrated in
Figure 4.6, delaying activity E2 beyond activity F. The schedules in Figures 4.5 and
4.6 are neither active nor non-delay schedules and can under no circumstances be optimal.
Since both schedules are contained within the search space, and both schedules do
allow local left shifts of activities, it can be stated, according to the theory presented in
Section 3.4.4.1, that the pseudo-serial schedule generation scheme can generate a superset
of the set of semi-active schedules as defined by Sprecher et al. (1995). It can however not
generate the full set of feasible schedules as the time tg can never be incremented beyond
the finish time of the last activity. Therefore the algorithm can never generate a gap in
Figure 4.5: A solution for the problem in Figure 4.2 that delays activity E2 while E2 has a
lower priority value than E1. This schedule cannot be generated by either of the serial and
parallel schedule generation schemes. It can however be generated by the pseudo-serial
schedule generation scheme.
Figure 4.6: A solution for the problem in Figure 4.2 that delays activity E2 twice, while
E1 has a lower priority value than E2. This schedule cannot be generated by either of
the serial and parallel schedule generation schemes. It can however be generated by the
pseudo-serial schedule generation scheme.
which no activity is active at all. The resulting set of schedules could accordingly be
named the set of pseudo-active schedules. This extends the Lemma defined by Sprecher
et al. (1995) to Lemma 9.
Lemma 9. Let S denote the set of all schedules, SF the set of feasible schedules, SSA the
set of semi-active schedules, SA the set of active schedules, SPA the set of pseudo-active
schedules, and SND the set of non-delay schedules. Then the following holds:
SND ⊆ SA ⊆ SSA ⊆ SPA ⊆ SF ⊆ S.
Figure 4.7 shows the convergence of the Water Wave Optimization (WWO) algorithm
discussed in Section 4.2 applied on the three discussed schedule generation schemes. The
graph was created using a population of 10 over 1000 generations. As a data set the
[Plot: deviation from the optimum in % (vertical axis) over 1000 iterations (horizontal
axis) for SSGS, PSGS, and the pseudo-serial scheme with vi = 1.1, 2, 3, and 4.]
Figure 4.7: Convergence of serial (SSGS), parallel (PSGS) and pseudo-serial schedule
generation schemes on the PSPLIB 30 activity benchmark set. The Water Wave Op-
timization (WWO) algorithm was run using all 3 scheduling schemes. In case of the
pseudo-serial schedule generation scheme, the initial population vi was varied, illustrat-
ing the transition between the serial and parallel schedule generation schemes.
PSPLIB j30 benchmark set was used. The lines show the average deviation in percent
from the known optima of all 480 benchmark problems in the set. It is visible that the
pseudo-serial schedule generation scheme can be used to model the transition between
the parallel and the serial schedule generation schemes by adjusting the initial population. The
parameter vi defines from which area of the search space the initial population is sam-
pled. A value of 1 implies that the population starts with only non-delay schedules. As
a large part of the optimal solutions in the benchmark set are non-delay schedules, the
parallel schedule generation scheme performs best on average. It is visualized by the grey
dashed line. It can however be stated that for 76 of the 480 problems the solution found
by the pseudo-serial schedule generation scheme was superior. The initial population
was varied between vi = 1.1 and vi = 4. In the first case, the initial population consists
almost entirely of non-delay schedules and the results therefore do not deviate much from
those of the parallel scheduling scheme. The further the parameter is increased, however,
the more the results approach the performance of the serial schedule generation scheme,
where the initial population will always be sampled from the set of active schedules.
Schedule generation schemes generally form vital tools for heuristic algorithms solv-
ing resource constrained project scheduling problems. Compared to the commonly used
serial and parallel schedule generation schemes, the pseudo-serial adaptation creates a
transition between the two, allowing the characteristics to be adjusted for the desired
purpose. Restricting the search space to the [0, 1] range results in the parallel schedule
generation scheme, while extending it in the positive direction causes it to converge to
the serial schedule generation scheme. It is however possible to construct a problem
where the number of delays required for a certain activity to reach the optimum is so
high that the convergence of the algorithm decreases strongly, because the search space
is extended to a superset of the set of active schedules. Nevertheless the algorithm
provides a valuable addition to the available schedule generation schemes and can improve
the results of many heuristic procedures. The runtime complexity of the pseudo-serial
schedule generation scheme is O(N² · |R|), hence equal to that of the other schemes as
presented in Section 3.4.4.
After each generation the wavelength of each wave x is updated as defined by Equa-
tion 4.2. fmin and fmax are the extreme fitness values of the current population of waves
and f(x) is the fitness value of the current wave x. α is the wavelength reduction coef-
ficient and ε is a very small number added to avoid division by zero. This procedure
simulates the movement of waves throughout the search space.
If the height of a wave decreases to zero, it is refracted based on Equation 4.3. N(µ, σ)
samples a value from the normal distribution with mean µ and standard deviation σ, and
x∗ is the best solution found so far. This component prevents the search from propagating
towards areas of low fitness.
\[
\lambda' = \lambda \cdot \frac{f(x)}{f(x')} \qquad (4.4)
\]
Whenever a wave x reaches a new optimal solution x∗, the wave breaks up into multiple
waves moving in multiple directions. A value k is randomly selected from the range
(1, kmax), where kmax is a predefined maximum number of breaking waves. At each of k
randomly selected dimensions d, a solitary wave x′ is generated based on Equation 4.5. If
none of the solitary waves is fitter than x∗, x∗ remains as the new x′; otherwise the fittest
of the solitary waves becomes the new x∗ and x′. This component causes the vicinity of
new extreme points to be evaluated for fitter values.
For a more detailed description of the algorithm and its implementation, the reader is
referred to the original paper by Zheng (2015).
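The propagation and refraction components described above can be sketched as follows. This is an illustrative fragment following the formulas in Zheng (2015), not the thesis implementation; the wavelength formula stands in for Equation 4.2 and the function names are the author's own.

```python
import random

def update_wavelengths(lambdas, fitness, alpha=1.0026, eps=1e-12):
    """Wavelength update (after Zheng, 2015): waves with better (higher)
    fitness receive shorter wavelengths and thus search more locally; eps
    avoids division by zero when all waves have equal fitness."""
    fmin, fmax = min(fitness), max(fitness)
    return [lam * alpha ** (-(f - fmin + eps) / (fmax - fmin + eps))
            for lam, f in zip(lambdas, fitness)]

def refract(x, x_best, rng=random):
    """Refraction (cf. Eq. 4.3): resample each dimension from a normal
    distribution centered on the midpoint of the current wave and the
    best-known solution x*."""
    return [rng.gauss((b + v) / 2.0, abs(b - v) / 2.0)
            for v, b in zip(x, x_best)]
```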
For the application of water wave optimization to resource constrained project schedul-
ing problems, the pseudo-serial scheduling scheme defined in Section 4.1 was used. Ex-
periments showed that the algorithm performs best when the initial population is sampled
only from the vicinity of the non-delay region, as shown in Figure 4.1. A comparison of
different initial populations is illustrated in Figure 4.7. A parameter surface for the popu-
lation size and the number of generations is illustrated in Figure 4.8. The algorithm was
thoroughly evaluated on the problems of the project scheduling problem library PSPLIB
by Kolisch & Sprecher (1997) and outperforms many other published algorithms.
Figure 4.8: Surface of parameter values. Horizontal axes show population size np and
number of iterations i. Vertical axis shows the mean error on the PSPLIB j30 problem
set.
Figure 4.9: Example schedule to illustrate local optimality. Rectangles represent activi-
ties, fill colors represent resources, and arrows represent precedence constraints.
Figure 4.10: The schedule from Figure 4.9 after swapping activities B and D. The swap
increases the makespan of the schedule.
Figure 4.11: The schedule from Figure 4.9 after swapping activities A and C. The swap
increases the makespan of the schedule.
Figure 4.12: The schedule from Figure 4.9 after swapping activities A and C, as well as
B and D. Applying both swaps decreases the makespan of the schedule.
The swap based local search algorithm (Dori, 2016) introduced in Section 3.4.8.8
uses the concept of reasonable swaps and the constraint-based simulation, which is simi-
lar to the parallel schedule generation scheme. For the iterative swap-based limited depth
search algorithm however, the pseudo-serial schedule generation scheme, which was in-
troduced in Section 4.1 is used implicitly. As described in Section 3.4.8.8 the parallel
schedule generation scheme can be modified to collect possible swaps during the sched-
ule generation process. The pseudo-serial schedule generation scheme can additionally
collect possible delays in its process. Algorithm 5 shows the pseudo code for this process.
In contrast to Algorithm 3 in Section 4.1 lines 13-15 and lines 22-24 have been added.
Lines 13-15 check whether, by scheduling one activity, other activities became infeasible,
hence whether the set of eligible activities Er (g) was reduced in size by more than 1. If that
is the case, each of the newly infeasible activities could have been executed instead of the
activity j, which was just scheduled. Hence each newly infeasible activity x forms a
possible swap (j, x) with the scheduled activity. Additionally lines
22-24 check whether there is more than one active activity in the set of active activities
for the next stage. If that is the case, all activities that have been executed in the current
stage can be delayed. It is however possible that not all of them can be delayed
simultaneously, as at least one active activity has to remain at each stage.
Figure 4.13 shows a tree created from the swaps and delays generated by Algorithm 5.
The root node on the top, with index 1, shows the initial schedule from Figure 4.9. Node
2 is generated by swapping activities A and C, while node 4 is generated by swapping
activities B and D. Nodes 3 and 5 are generated by delaying either activity B or C. Node
6 can either be reached by swapping B and D after node 2, or by swapping A and C after
node 4. Hence it would create a duplicate node in the search tree.
In order to economize on computational effort, it is desirable to avoid duplicate nodes,
as described by Glover (1990b) in his work on tabu search. Keeping a history of all
encountered schedules during the search process does however require vast amounts of
memory. The C-sequence method and the reverse-elimination method, introduced for this
kind of problem in Glover (1990a), cannot be applied to reliably detect duplicate
schedules, although swaps and delays can be viewed as paired-attribute moves. This is
Figure 4.13: Search tree of schedules based on swaps and delays generated using Algo-
rithm 5. The number in the upper right corner of each rectangle indicates the node index.
The top schedule is the initial schedule from Figure 4.9. The schedule is changed by ei-
ther a swap or a delay at each edge. Double arrowheads indicate delays. The red rectangle
indicates the optimal solution for the problem. As the depiction contains one undirected
cycle along nodes 1, 2, 4, and 6, it is not strictly a tree. However, node 6 would in practice
only be reached through either node 2 or 4.
caused by the fact that the same schedule can be reached along different paths, even along
paths consisting of entirely different operations. The schedule in Figure 4.14 can for
instance be generated from the schedule in Figure 4.9 either by delaying activity B,
assuming the priority value for D is higher than that for B, or by swapping activities
B and D.
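One pragmatic compromise, sketched below under the author's own assumptions (it is not the method used in the thesis), is to identify each generated schedule by a hash of its start-time vector, so that two different swap/delay paths to the same schedule collide while the history costs only one integer per visited schedule:

```python
def seen_before(start_times, visited):
    """Duplicate check keyed on the resulting schedule itself (its start-time
    vector) rather than on the path of moves that produced it."""
    key = hash(tuple(start_times))
    if key in visited:
        return True
    visited.add(key)
    return False
```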
Figure 4.14: The schedule from Figure 4.9 after either delaying activity B, assuming the
priority value for D is higher than that for B, or after swapping activities B and D.
The idea of the iterative swap-based limited depth tree search algorithm is based on
the observation that the number of swap or delay operations required to escape a local
minimum can be assumed to be low. The number of steps to be taken in search of a
new optimum is the main parameter of the algorithm. The depth of search per iteration
defines to how many levels the search tree is built up. The best solution found within this
limited depth search tree forms the root node for the next iteration. The algorithm repeats
this process until the root node of the current iteration remains the best known solution.
Compared to search procedures with a fixed tolerance for accepting worse solutions, such
as the tolerance search by Dori (2016), this algorithm can escape even deep local optima,
given that they can be escaped within a small enough number of steps. This is plausible
because applying a swap early within a schedule can create an entirely different schedule.
Experiments showed that the algorithm compares well to various published algorithms on
the problems of the project scheduling problem library PSPLIB by Kolisch & Sprecher
(1997).
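Abstracting away from schedules, the iteration scheme can be sketched as follows; `neighbors` stands in for the swap-and-delay moves of Algorithm 5 and all names are hypothetical:

```python
def iterative_limited_depth_search(initial, neighbors, cost, depth=2):
    """Build the move tree `depth` levels deep, jump to the best node found,
    and repeat until the current root remains the best known solution."""
    best = initial
    while True:
        frontier = [best]
        candidate, cand_cost = best, cost(best)
        for _ in range(depth):
            # expand one more level of the limited-depth tree
            frontier = [m for s in frontier for m in neighbors(s)]
            for s in frontier:
                if cost(s) < cand_cost:
                    candidate, cand_cost = s, cost(s)
        if candidate == best:
            return best
        best = candidate
```

On a toy one-dimensional landscape, a depth of 2 lets the search step through a worse intermediate solution and escape a local minimum that a depth-1 (purely greedy) search cannot leave.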
experiments have been performed on each of the problem sets, ranking the presented al-
gorithms.
The detailed benchmark results are presented in 4 tables, one for each of the PSPLIB
problem sets. The rows labeled RL show the results of the agent-based reinforcement
learning algorithm by Jedrzejowicz & Ratajczak-Ropel (2014), while the rows labeled
PSO show the particle swarm optimization results published by Zhang et al. (2005a), and
PSO2 refers to the particle swarm optimization algorithm by Chen et al. (2010). The rows
labeled GA list the results of the genetic algorithm presented by Alcaraz & Maroto (2001).
EA is the evolutionary algorithm by Valls et al. (2001) and GH is the greedy heuristic by
Goncharov (2011). HSS is the hybrid scatter search algorithm by Debels et al. (2006).
Rows labeled WWO PS were obtained by the water wave optimization algorithm, which
was presented in Section 4.2, operating on the pseudo-serial scheme, which was presented
in Section 4.1. The rows labeled ILDTS were obtained by using the iterative limited depth
tree search algorithm presented in Section 4.3. For each of the competing algorithms the
best performing parameter set has been selected out of the published results. For details
on the listed parameters, the reader is referred to the original papers. The columns labeled
ropt give the ratio of optimal solutions found for the j30 problem set, while the columns
labeled rbest give the ratio of best known solutions found for the other sets. The x̄ columns
give the mean relative error from the optimal or best known solutions. The columns t give
the mean processing time per solution and the nf columns give the number of fitness
function evaluations or schedule generations performed. Since the processing times were
obtained by the individual authors on different machines, they only yield a rough com-
parison of the run time efficiency, but due to missing nf values for some experiments, the
values were still included in the tables. The t values for the WWO results were measured
on an "Intel(R) Xeon(R) CPU E5-2630 @ 2.30GHz" machine running on a single thread
per experiment. All tables are sorted in increasing order of the mean relative error x̄. For
the WWO rows, np is the population size used and i is the number of iterations of the
algorithm that were performed. All other parameter values were set as recommended in
the paper by Zheng (2015). α = 1.0026, β is linearly decreasing from 0.25 to 0.001, the
maximum number of breaking waves kmax = min(12, na /2) where na is the number of
activities. The maximum wave height is hmax = 6. Each table only lists the best results for
each algorithm; an overview of different parameter settings can be found in Appendix
B.1.
Table 4.1 shows the benchmark results for the j30 problem set. The hybrid scatter
search (HSS) algorithm by Debels et al. (2006) performed best on this problem set, but
also the two algorithms developed by the author compare well to other algorithms pub-
lished in literature. While the other algorithms perform better on the data, the run times t
of the agent-based reinforcement learning algorithm by Jedrzejowicz & Ratajczak-Ropel
(2014) are much higher. The same holds for the evolutionary algorithm by Valls et al. (2001).
The genetic algorithm by Alcaraz & Maroto (2001) and the particle swarm optimization
algorithm by Zhang et al. (2005a) converged faster, as they only needed half the num-
ber of fitness function evaluations nf in order to achieve lower error rates. The particle
swarm optimization algorithm by Chen et al. (2010) did not publish the number of fit-
ness function evaluations nf used, but the mean deviation was higher than most tested
configurations of the presented algorithm. When performing i = 1000 iterations on the
Table 4.1: Benchmark results for j30 problem set ordered by the mean deviation x̄ from
the known optimal solutions. The column ropt lists the ratio of optimal solutions found.
nf is the number of fitness function evaluations required per problem. t is the mean
processing time.
Table 4.2: Benchmark results for j60 problem set ordered by the mean deviation x̄ from
the best known solutions. The column rbest lists the ratio of best known solutions found.
nf is the number of fitness function evaluations required per problem. t is the mean
processing time.
a population size of np = 10000, using the WWO PS algorithm, the error is x̄ = 0.04%,
while in 97.71% of the problems the global optimum was found. This outperforms the
results of the PSO, EA, and GA algorithms. For comparison with the PSO and GA algo-
rithms, two experiments with nf ∼ 5000 generated schedules each have been conducted,
yielding a minimum mean error of x̄ = 1.23%, which is slightly worse than the compared
algorithms. Generally it was observed that more than i = 1000 iterations only have a
small effect on the outcome, while the population size plays a more important role. This
was illustrated by a performance surface in Figure 4.8 in Section 4.2, which shows the
mean error over the space of the parameters population size and iterations on the j30
problem set. It shows that a certain number of iterations is required, after which a larger
population yields better improvements. The ILDTS algorithm performs slightly worse
than the WWO PS algorithm, finding 93.33% of the global optima with a mean error of
Table 4.3: Benchmark results for j90 problem set ordered by the mean deviation x̄ from
the best known solutions. The column rbest lists the ratio of best known solutions found.
nf is the number of fitness function evaluations required per problem. t is the mean
processing time.
Table 4.4: Benchmark results for j120 problem set ordered by the mean deviation x̄ from
the best known solutions. The column rbest lists the ratio of best known solutions found.
nf is the number of fitness function evaluations required per problem. t is the mean
processing time.
which are compared in Table 4.3. With a ratio of 71.67% of the best known solutions
found and a mean deviation of 1.61% the WWO PS algorithm delivers good results which
outperform the particle swarm optimization algorithm by Chen et al. (2010), but could
neither match the reinforcement learning algorithm by Jedrzejowicz & Ratajczak-Ropel
(2014) nor the evolutionary algorithm by Valls et al. (2001) in the quality of the results. With
little drawback, though, the presented algorithm can be configured to run much faster
than the other algorithms. Unfortunately the number of fitness function evaluations nf
required to obtain the results was not published for any of the other algorithms. The
ILDTS algorithm managed to find 65.42% of the best known solutions and deviated by 1.62% on
average.
Table 4.4 compares the results of the presented algorithms to the other algorithms on
the most complex j120 problem set containing 600 problems with 120 activities each.
While the genetic algorithm (GA) and the hybrid scatter search (HSS) gave outstanding
results, the other algorithms published very high run times for their results. In the best
configuration that was tested, np = 1000, i = 1000 the WWO PS algorithm was only
able to outperform the particle swarm optimization algorithm by Chen et al. (2010), but
with a mean error of x̄ = 5.48% still produced very good results. The ILDTS algorithm
falls behind, with only 11.83% of the best known solutions found and an average deviation
of 7.41%, given a search depth of d = 2. Experiments with larger search depths were not
feasible due to the immense number of possible swaps within the schedules.
While outperformed by some of the algorithms in literature, especially the hybrid
scatter search algorithm by Debels et al. (2006), the presented algorithms provide a valu-
able addition to the available arsenal of RCPSP heuristics. Especially the pseudo-serial
schedule generation scheme offers the possibility to be used with a wide variety of other
metaheuristic algorithms.
As an example, each activity’s duration was plotted for the j30u problem with instance
1 and parameter 1 in Figure 4.15. The resulting possible schedules are plotted in Figure
4.16. The figure shows the Monte-Carlo results for 100, 000 runs. Each plot shows the
likelihood of an activity being active at the respective time. The precedence constraints
of the activities are illustrated in Figure 4.17. Figure 4.18 shows the distribution of the
overall makespan over 1, 000, 000 runs.
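A minimal Monte-Carlo sketch of this setup is shown below. It uses precedence-only earliest-start scheduling and hypothetical beta shape parameters (a = 2, b = 5); it is not the actual UPSPLIB generator.

```python
import random

def sample_makespans(base_durations, preds, runs=1000, seed=42):
    """Delay each activity by a beta-distributed fraction of its base
    duration and record the precedence-only makespan of each run."""
    rng = random.Random(seed)
    makespans = []
    for _ in range(runs):
        # each duration grows by a factor in [1, 2), drawn from Beta(2, 5)
        durations = [d * (1.0 + rng.betavariate(2, 5)) for d in base_durations]
        finish = [0.0] * len(durations)
        for j in range(len(durations)):  # assumes topological ordering
            est = max((finish[p] for p in preds[j]), default=0.0)
            finish[j] = est + durations[j]
        makespans.append(max(finish))
    return makespans
```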
In order to compare different algorithms on this extended benchmark set, eight objective
functions have been designed, which are listed in Table 4.5. Each objective is based on
performing a certain number of Monte Carlo runs of a resulting schedule. O1 defines the
first objective function as the minimum makespan of all the K Monte Carlo
runs. O2 defines the second objective as the mean makespan, while O3 defines the worst
encountered makespan. (The generated values have been published in the git repository
hosted at https://github.com/tumcms/upsplib to be used for comparison studies.) O4
defines the standard deviation of the set of the Monte-Carlo
makespans. O5 defines the objective as the sum of the mean makespan and the standard
deviation. O6 defines the objective as the mean makespan added to twice the standard
deviation. O7 is the worst encountered makespan added to the standard deviation, while
O8 adds twice the standard deviation. O5 to O8 are multi-criteria objectives using a
weighted sum as presented in Section 3.2.2.4.
\[
O_1 = \min_{k \in K}(m_k) \qquad\qquad O_2 = \frac{\sum_{k \in K} m_k}{|K|}
\]
\[
O_3 = \max_{k \in K}(m_k) \qquad\qquad O_4 = \sqrt{\frac{\sum_{k \in K} m_k^2}{|K|} - \left(\frac{\sum_{k \in K} m_k}{|K|}\right)^2}
\]
\[
O_5 = O_2 + O_4 \qquad\qquad O_6 = O_2 + 2 \cdot O_4
\]
\[
O_7 = O_3 + O_4 \qquad\qquad O_8 = O_3 + 2 \cdot O_4
\]
Table 4.5: The eight objective functions of the UPSPLIB problem library.
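As an illustration, the eight objective values can be computed from a set of Monte Carlo makespans as follows; this is a minimal Python sketch using the population standard deviation, consistent with the definition of O4, and the example makespans are arbitrary.

```python
from statistics import mean, pstdev

def objectives(makespans):
    """Compute the eight UPSPLIB objective values from the set of
    Monte Carlo makespans m_k, k in K."""
    o1 = min(makespans)       # best encountered makespan
    o2 = mean(makespans)      # mean makespan
    o3 = max(makespans)       # worst encountered makespan
    o4 = pstdev(makespans)    # population standard deviation
    return {
        "O1": o1, "O2": o2, "O3": o3, "O4": o4,
        # weighted-sum multi-criteria objectives
        "O5": o2 + o4, "O6": o2 + 2 * o4,
        "O7": o3 + o4, "O8": o3 + 2 * o4,
    }

obs = objectives([10, 12, 11, 14, 13])
```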
Figure 4.15: Uncertain durations of the 30 individual activities for the j30u problem with
instance 1 and parameter 1. Each activity is delayed using a beta distributed random vari-
able. The horizontal axes show the duration, the vertical axes the probability. Activities
are arranged from left to right, then top to bottom.
Figure 4.16: Schedule for the j30u problem with instance 1 and parameter 1, as shown in
Figure 4.15, using the default activity order with the parallel schedule generation scheme.
Each plot corresponds to one activity and shows the probability of an activity being active
at a certain time. The plots are based on 100,000 runs. Activities are arranged from left
to right, then top to bottom.
Figure 4.17: Dependency graph of the 30 activities for the j30u problem with instance
1 and parameter 1. The numbers in parentheses are the minimum durations. Activities 1
and 32 are added source and sink activities with duration 0.
Figure 4.18: Uncertain duration of a schedule for the j30u problem with instance 1 and
parameter 1, as shown in Figure 4.15, using the default activity order with the parallel
schedule generation scheme. The horizontal axis shows the duration; the vertical axis
shows the number of occurrences out of 1,000,000 runs.
This Section contains three tables for the j30u problem set. Each table has one column
for each of the objective functions defined previously. Furthermore, there are only four
rows, corresponding to the first four objective functions, as the remaining objectives are
combinations of the first four. The columns correspond to the optimization objectives that
were used, and the
rows correspond to the output values. Each table was populated using simulation results
of the best solutions found, according to the respective objective functions, using the water
wave optimization algorithm with a population of p = 10 waves and n = 1000 iterations.
For each objective function evaluation, 10 Monte Carlo runs were carried out. The final
solution for each of the optimization runs was evaluated 10,000 times in a Monte Carlo
fashion.
Table 4.6 shows the minimal deviations from the optimal solutions of the correspond-
ing j30 PSPLIB problem for the first three rows, while the last row shows the standard
deviation. The first row shows the minimum deviation of the best simulation runs for
each solution. The second row shows the mean deviation and the third row shows the
maximum deviation. Table 4.7 shows the same measures, but for the mean values of the
10,000 runs, while Table 4.8 shows the worst solutions encountered during the Monte Carlo runs.
Looking at Table 4.7, which shows the mean results, several things can be noticed. The
lowest minimum makespan is achieved when using the minimum makespan O1 or the mean
makespan O2 as objective function. The lowest maximum makespan is achieved when us-
ing the maximum makespan O3 as an objective function. But similar results are achieved
when minimizing the sum of the mean makespan and the standard deviation (O5 ) or max-
imum makespan and the standard deviation (O7 ). Using the standard deviation O4 as an
objective function generally returned poor results. This can be traced back to the
standard deviation creating a very discontinuous search space. When looking at the
worst-case results in Table 4.8, it is visible that the objective function that minimizes
both the worst-case makespan and the worst-case standard deviation is the sum of the
mean makespan and the standard deviation (O5). Additionally, this objective function
also achieves the lowest worst-case and mean best-case values in Table 4.6. More detailed
results are presented in Appendix B.2 and the precise solutions have been published to
GitHub6.
6 https://github.com/tumcms/upsplib
Result \ Objective     O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.05    1.02    1.04    1.04    1.03    1.03    1.04    1.04
O2 (mean)             1.11    1.11    1.11    1.13    1.11    1.11    1.12    1.11
O3 (max)              1.16    1.16    1.16    1.21    1.16    1.18    1.20    1.18
O4 (stDev)            0.89    0.89    1.02    1.03    0.96    0.98    0.95    0.92
Table 4.6: Best solutions of water wave optimization on the uncertainty-extended PSPLIB
j30u set for the different objective functions O1 to O8 . The result measures have been
obtained through 10,000 Monte Carlo runs of the final solution.
Result \ Objective     O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.21    1.21    1.22    1.54    1.22    1.22    1.22    1.22
O2 (mean)             1.33    1.32    1.32    1.68    1.32    1.32    1.32    1.32
O3 (max)              1.49    1.47    1.46    1.85    1.46    1.47    1.46    1.47
O4 (stDev)            2.82    2.41    2.09    2.47    2.11    2.17    2.09    2.09
Table 4.7: Mean results of water wave optimization on the uncertainty-extended PSPLIB
j30u set for the different objective functions O1 to O8 . The result measures have been
obtained through 10,000 Monte Carlo runs of the final solution.
Result \ Objective     O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.46    1.46    1.49    2.36    1.48    1.53    1.56    1.49
O2 (mean)             1.68    1.59    1.60    2.62    1.62    1.67    1.66    1.62
O3 (max)              1.91    1.97    2.11    2.77    1.88    2.09    2.14    2.06
O4 (stDev)           12.79    9.44    7.04    8.30    5.72    9.87    8.01    6.24
Table 4.8: Worst results of water wave optimization on the uncertainty-extended PSPLIB
j30u set for the different objective functions O1 to O8. The result measures have been
obtained through 10,000 Monte Carlo runs of the final solution.
This Section illustrated that the uncertainty extension of the PSPLIB problem library
enriches the commonly used benchmark problems by adding uncertain delays to each
problem, therefore allowing the evaluation of an algorithm's performance for RCPSPs with
uncertain activity durations. The added objective functions allow the heuristic optimiza-
tion of different statistical characteristics of the problems. The inclusion of the schedule
durations’ standard deviation generates a more chaotic search space, creating an added
challenge for heuristic optimization procedures.
4.7 Summary
This Chapter introduced a new schedule generation scheme, the pseudo-serial schedule
generation scheme, that combines advantages of both the commonly used schemes, the
serial and parallel schedule generation schemes. It has been shown that the new scheme
can generate all active schedules, as does the serial scheme, but is computationally more
efficient and can be used to collect possible swaps and delays during schedule generation,
as does the parallel schedule generation scheme. Furthermore this Chapter introduced two
novel scheduling algorithms. The Water Wave Optimization (WWO) algorithm by Zheng
(2015) was applied to RCPSPs and the iterative swap-based limited depth search has been
developed. Both methods have been shown to compete with other algorithms from the litera-
ture; however, the WWO algorithm proved to be much stronger and computationally more
efficient in finding good solutions to the problems of the PSPLIB. Finally, an extension
of the PSPLIB has been developed, adding uncertainty using beta-distributed delays to
each activity in each problem. All values have been published to allow comparisons to be
performed by other researchers. The extension allows scheduling algorithms to be eval-
uated on problems with uncertain performance factors and activity durations. It has been
shown that the presented algorithms can handle the extended problems and therefore can
be applied to real-life problems involving data acquisition with uncertainty in the context
of a reactive scheduling system.
5 State-of-the-Art in Data
Acquisition on Construction Sites
"Not everything that counts can be counted and not everything that can be
counted counts." - Cameron (1963)
Large construction projects are built over long periods of time, sometimes years pass
before the project is finished. As such projects consist of many individual activities, it can
be difficult to maintain an overview of the state of all of them. Additionally, construc-
tion projects are commonly subject to unforeseen events that affect the building process.
Therefore copious research was dedicated to data acquisition techniques for construction
activities. For the reactive scheduling approach, data feedback is a vital ingredient. The
acquired progress information can be used to update the model of the construction project
by updating performance factors and activity durations. This data is important to keep the
model of the construction project synchronized and therefore allow for reactive planning
and prognosis.
5.4 Photogrammetry
Ullman (1979) and Webb & Aggarwal (1982) developed an approach, called structure-
from-motion (SFM), that allows the creation of three-dimensional point clouds from
photographs. Later this idea was developed further into a system called VisualSFM by
Wu (2011). In combination with the patch-based multi-view stereo (PMVS) algorithm
published by Furukawa & Ponce (2010), this method can generate very high quality
colored point clouds from unordered photographs of an object.
In order to generate such point clouds, a method for finding common features in im-
ages is required. For this purpose the Scale Invariant Feature Transform (SIFT) (Lowe,
1999) and the Speeded Up Robust Features (SURF) (Bay et al., 2006) algorithms are
most commonly used. Both algorithms create descriptions of local geometric features
within images that are invariant to scale and rotation. Furthermore they are robust against
changes in illumination, noise and slight deformation. Each feature can be compared to
other features in order to identify similar points in different images. Figures 5.1 a) and b)
show two illustrative images of a mountain site. Figures 5.1 c) and d) show how features
in the images can be mapped to each other. The blue dashed lines show the identified
features. This technique is commonly used for panorama stitching, but also finds use in
the SFM approach.
Figure 5.1: Matching of image features. a) and b) show two input images. The blue
dashed lines connect matching features in c) and d).
For the SFM approach, multiple images are required and their features have to be matched
among them. Figure 5.2 illustrates that a feature located in a single 2D image cannot
generally be located in 3D space.
In order to make use of the motion of the camera in between multiple shots it is required
to locate the same feature points within the images. In theory this requires the features
of every image to be compared to the features of every other image, which requires n²/2
image comparisons for n images. For large objects where many images are required, this
can take substantial amounts of time. In order to reduce this effort, Silpa-Anan & Hartley
(2008) have developed an optimized matching algorithm based on the k-d tree algorithm
by Beis & Lowe (1997) and Lowe (2004).
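The matching step can be sketched as follows, using SciPy's cKDTree as a stand-in for the approximate best-bin-first search of Beis & Lowe, together with Lowe's ratio test; the 8-dimensional toy descriptors are an illustrative assumption (real SIFT descriptors have 128 dimensions).

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors of image A against image B via nearest-neighbour
    search in a k-d tree, keeping only matches that pass the ratio test
    (best distance clearly smaller than the second-best)."""
    tree = cKDTree(desc_b)
    dist, idx = tree.query(desc_a, k=2)   # two nearest neighbours each
    matches = []
    for i, ((d1, d2), (j, _)) in enumerate(zip(dist, idx)):
        if d1 < ratio * d2:               # keep unambiguous matches only
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.random((50, 8))                          # 50 toy descriptors
desc_a = desc_b[:10] + rng.normal(0, 0.01, (10, 8))   # noisy copies of 10
matches = match_features(desc_a, desc_b)
```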
Figure 5.2: The location of a feature in a single two dimensional image does not reveal
its location in three dimensional space.
If the same feature has been located in two different images, and the positions where
the images were taken, the camera parameters, and the camera orientation are all
known, it is possible to locate the feature in 3D space. This is illustrated in Figure 5.3.
The problem becomes more complex if the camera positions are unknown. Depending
on whether the camera parameters are known and whether it is known that the same
camera setup was used for all recorded images, an initial image pair needs between six
and eight feature correspondences and each added feature requires between two and three
correspondences (Astrom et al., 1999; Hartley & Zisserman, 2002). Figure 5.4 illustrates
two feature correspondences in three images.
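Given known camera matrices, the triangulation can be sketched with the standard linear (DLT) method; the camera setup and the 3D point below are synthetic examples, not data from the cited works.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 in two images with known 3x4 camera matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                  # null vector of A (homogeneous point)
    return X[:3] / X[3]         # de-homogenize

def project(P, X):
    """Project a 3D point with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: one at the origin, one shifted along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```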
Small deviations in the detection of features and the quality of the recorded images
cause the features to not match up precisely. For this purpose the bundle adjustment algo-
rithm is used to find the optimal geometry minimizing those errors among the analyzed
images (Brown, 1976; Triggs et al., 2000). Repeating this procedure for several images
results in a point cloud of the recorded scenery.
Finally the density of the resulting point cloud can be increased by using dense stereo
reconstruction methods such as the patch-based multi-view stereo algorithm by Furukawa
& Ponce (2010). For an overview of such methods the reader is referred to the paper by
Ahmadabadian et al. (2013).
Tuttas et al. (2014) used photogrammetry to monitor the progress of construction
projects using building information models (BIM). The work was later extended by using
geometric constraints which were derived from BIMs in order to fill in missing informa-
Figure 5.3: The locations of a feature in two two dimensional images reveal its true loca-
tion, given the locations and parameters of both cameras are known.
tion (Braun et al., 2015a,b; Stilla et al., 2015; Tuttas et al., 2015).
A method for measuring excavated volume from photogrammetric point clouds of
excavation pits has been developed by the author and is presented in Section 6.1.2.
5.6.1 GPS
The global positioning system (GPS) is based on tracking devices which can be located
within a few meters of deviation. In 1995, when the GPS system was officially activated,
one of the first commercial use cases was the tracking of heavy agricultural machines
(O’Connor et al., 1995). This approach was first presented in a construction site envi-
ronment by Naresh & Jahren (1997), where trucks were equipped with GPS trackers. At
this point, equipping a single truck with this technology cost several thousand dollars
and was therefore hardly feasible on a large scale. When the technology became
more widespread and costs dropped drastically, its use became more realistic and new
approaches were envisioned. Oloufa et al. (2003) presented a method to not only track
trucks, but also on-site equipment and additionally showed that the method can be used
to prevent collisions. Furthermore Li et al. (2005) showed that the technology can reduce
waste, improve efficiency, and reduce costs as a result. The idea was later extended to also
capture the positions of materials on the site, as presented by Song et al. (2005). Pradhananga
& Teizer (2013) presented methods for detailed spatial-temporal analysis of the obtained
data, to improve efficiency of the on-site operations even further. All those technologies
are widely used in many areas today. However, since the costs of GPS trackers still aren’t
negligible, the technology still isn’t efficient for material tracking, especially for small
parts.
5.6.2 RFID
Radio-frequency identification (RFID) tags are passive devices with a tiny form factor.
If they are activated by surrounding radio waves, they broadcast their hardcoded ID,
which can then be received by a reader device. This allows the registration of RFID
tags in the area surrounding a reader. Jaselskis et al. (1995) presented different possible
applications of RFID on construction sites. Similar to GPS, a truck can be equipped with
an RFID tag; in contrast, however, it cannot be continuously tracked with this technology.
Instead, the times it passes certain points, referred to as gates, can be logged. These would,
for instance, be the times it enters and leaves a construction site. Furthermore, RFID tags can
be cast into prefabricated concrete parts, whose entrance onto the site can then be logged.
Moreover, all tagged and placed parts on the site can easily be verified by a walk-through
using an RFID reader device, logging all surrounding tags (Song et al., 2005). Costin
et al. (2012) developed this technology further into a system that automatically reports the
progress of ongoing activities based on the location of RFID tags placed on materials and
personnel. Kiziltas et al. (2008) analyzed the practical implications of this technology.
They also discuss the active variant of RFID, which is commonly implemented using
ultra-wideband technology. Klaubert & Günthner (2011) presented a holistic RFID-based
approach for construction information and communication systems.
Compared to passive RFID, ultra-wideband (UWB) technology is active. The UWB de-
vice sends radio waves to allow a receiver infrastructure to track its location. Compared to
passive RFID, UWB allows precise location tracking in three dimensions. Furthermore,
it is more precise than GPS and works in occluded areas, such as indoor spaces (Teizer
et al., 2007b). As materials are often stored in indoor storage places, GPS tracking is not
possible in those cases. The system has been used for performance measurements (Cheng
et al., 2011; Saidi et al., 2011), resource tracking (Teizer et al., 2007a), and personnel
tracking (Sahinoglu, 2008).
One approach to visual tracking, which is similar to the previously explained method of
3D scanning, is range imaging. A range image is a two-dimensional image that is en-
hanced by adding depth information to each pixel. There are different methods to generate
this depth information. The best-known approach is stereo triangulation, which is also
used by the human visual apparatus and, accordingly, for recording 3D movies. An-
other common technique is the use of structured light, projecting a special light pattern
onto the scene which can be used to retrieve depth information. The most advanced meth-
ods use a coded aperture (Levin et al., 2007) or so-called time-of-flight (ToF) methods
(Oggier et al., 2004). Time-of-flight methods create a single pulse of light and measure
the time it takes to be reflected from the different surfaces in the scene. As ToF works in
real-time and is very precise it can be used for real-time tracking of resources on site.
Teizer et al. (2007a) presented the use of range imaging for maintaining an occupancy
grid of a construction site, while Kahlmann et al. (2007) presented its use for personnel
tracking. The idea was extended towards the creation of as-built building models by
Bosche & Haas (2008). Teizer (2008) and Gonsalves & Teizer (2009) proposed the use
for safety analysis and human motion analysis. An approach for 3D object detection for
heavy equipment tracking was introduced by Son et al. (2010). Ng et al. (2005) describe a
new technology, the plenoptic camera, which records images or video in three dimensions
using a single camera. As the resolution of such cameras is still limited and the costs are
very high, they were not yet subject to research in the area of construction tracking.
Generally the use of range imaging techniques is cost intensive and creates large
amounts of data that are hard to process automatically. Therefore two-dimensional imag-
ing techniques are still in the focus of researchers.
Many processes on construction sites can be monitored visually. A human viewing the
site all day from a vantage point would be able to monitor most processes. Especially
in the shoring and excavation phase, all involved personnel and machines are mostly vis-
ible. Therefore computer vision-based techniques are of great interest for monitoring
those processes automatically. The use of computer vision techniques on video record-
ings has a long history of research. In the seventies, when computers first became capable
of processing video sequences, Flachs et al. (1976) and Flachs et al. (1977) presented an
automatic object tracking system that formed the basis of decades of following research.
Later they managed to run the tracking system in real-time (Gilbert et al., 1980). Despite
those early advances in video tracking technology, there are still many challenges subject
to ongoing research. Song et al. (2006) published an approach to video based tracking on
construction sites to both measure and improve the performance of construction opera-
tions. Teizer & Vela (2009), Yang et al. (2009) and Yang et al. (2010) present an approach
for automated video-based personnel tracking. A possible approach to assess the pro-
ductivity of construction operations is to identify the status of different activities by the
postures of involved workers (Ray & Teizer, 2012). Furthermore the tracking of resources
and materials can be subject of video monitoring approaches (Gong & Caldas, 2010). On
many construction sites tower cranes play a key role. Yang et al. (2011) presented an
approach for vision-based crane tracking for the analysis of construction activities. An-
other common key activity of construction projects is earthwork. Ogunmakin
et al. (2013) presented a method using video cameras to quantify interactions between
excavators and dump trucks. A holistic system, including video based monitoring of the
construction activities was presented by Leung et al. (2008).
Section 6.2 presents a novel method that is based on video-based tracking, as well as
photogrammetry.
5.8 Summary
This Chapter presented an overview of the state-of-the-art of progress tracking technolo-
gies that can be used on construction sites. While manual protocol-based tracking is still
widely used in practice, it is also possible to automatically record machine data and ex-
tract information about the executed activities from that data. It has been shown that this
information can be used to calculate performance factors for the involved resources. Fur-
thermore laser scanning and photogrammetry have been proven effective in generating
point clouds with sufficient quality to detect construction elements and measure the as-
built performance. Additionally video-based approaches were presented, and have been
shown to be effective in tracking construction assets. The presented methods have not
been used to monitor excavation processes by obtaining absolute measures. While there
are methods to count the number of dump trucks entering and leaving a site, the exact
volume of excavated soil is a more precise measure. The following Chapter introduces
improved vision-based methods for this purpose.
134 6 Improved Vision-based Progress Monitoring of Excavation Processes
erated point clouds. As a result up-to-date performance factors for the involved processes
can be calculated.
As described in Section 5.3 laser scanning can be used to generate three dimensional
point clouds. The laser scanner is placed near the excavation pit to record a laser scan.
Due to the concave shape of an excavation pit, a single laser scan can hardly capture
the full surface of the pit, as is illustrated in Figure 6.1. Additionally machines, such as
excavators and dumpers commonly operate within the pit and create occluded areas, as
illustrated in Figure 6.2. Those problems are commonly solved by performing multiple
laser scans and registering the resulting point clouds to stitch them into one (Makadia et al.,
2006). This stitching operation can be time-intensive and introduce errors (Rabbani et al.,
2007). In other scenarios the stitching is aided by marker points that are placed in the
scene (Bornaz & Rinaudo, 2004). In the case of excavation sites this is troublesome due to
the constant change of the excavation pit. Additionally, laser scanning requires trained
personnel and expensive equipment (Flood, 2001).
Figure 6.1: Laser scanning an excavation pit. The laser scanner is placed at the right of
the image, illustrated by a gray square. The blue area shows the field of view of the laser
scanner. Due to the shape of the pit, the scanner can hardly capture the entire soil surface.
6.1.2 Photogrammetry
Figure 6.2: Laser scanning an excavation pit. The laser scanner is placed at the right of
the image, illustrated by a gray square. The blue area shows the field of view of the laser
scanner. Due to the shape of the pit and an excavator occluding parts of the area, the
scanner can hardly capture the entire soil surface.
areas not visible from any point outside of the pit. Additionally photos from inside the pit
can be taken.
Figure 6.3: The operator’s perspective, looking into the excavation pit, while taking pho-
tos for photogrammetric point cloud generation. The ten blue rectangles represent the
photos taken by the operator. The photos need to overlap such that they share common
features.
Figure 6.6 shows a practical example where photos have been taken around the pit
and additional pictures were taken from inside the pit. Furthermore, the pictures il-
lustrated in the lower right were taken from an elevated angle off a tower crane. After
the images have been processed by the structure from motion algorithm VisualSFM (Wu,
2011), bundle adjustment (Wu et al., 2011), and the patch-based multi-view stereo algo-
rithm (Furukawa & Ponce, 2010) the dense point cloud representation shown in Figure
6.7 was generated.
6.1 Excavation Tracking using Point Clouds 137
Figure 6.4: Taking photos of an excavation pit for photogrammetric point cloud generation.
The darker area in the center is the excavation pit. Each blue triangle shows the orientation
of a photo that was taken. The operator moves around the pit taking photos of all currently
visible areas of the pit.
Figure 6.5: Cutting plane of excavation pit while taking photos for photogrammetric point
cloud generation. Each blue triangle shows the orientation of a photo that was taken. At
each location the operator takes photos of all currently visible areas of the pit.
After a point cloud is generated by the algorithms described in the previous Section and
in Section 5.4, it contains noise and unwanted geometry created by machines, as well
as structures and buildings surrounding the excavation pit. Furthermore, the individual
points of the cloud need to be post-processed in order to obtain the desired volume
measure.
6.1.3.1 Clustering
As a first step the point cloud is cleaned of unwanted points outside the region of inter-
est. This is performed using conditional Euclidean clustering, where points are combined
into different clusters, similar to the approach by Trevor et al. (2013). Each point that is
Figure 6.6: Camera positions of photos taken on an actual construction site. The circular
area is the excavation pit and each pyramid represents a photo that was taken.
added to a cluster needs to be within a certain Euclidean distance to another point within
the cluster. The resulting clusters differ in size and the main cluster which forms the ex-
cavation pit can be identified as the largest one. The maximum distance of a point to a
cluster is a parameter for this operation, but can be determined automatically based on the
overall size of the point cloud (Shi et al., 2011). All other clusters are removed from the
point cloud. The result of this process is shown in Figure 6.10. A survey of other possible
approaches has been published by Nguyen & Le (2013).
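A minimal sketch of such a Euclidean clustering step, assuming a k-d tree for the radius queries and a breadth-first cluster expansion; the synthetic cloud and the distance threshold are illustrative.

```python
from collections import deque

import numpy as np
from scipy.spatial import cKDTree

def largest_euclidean_cluster(points, max_dist):
    """Group points into clusters where every member lies within max_dist
    of some other member, and return the largest cluster (the pit)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    best = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            p = queue.popleft()
            for q in tree.query_ball_point(points[p], max_dist):
                if q in unvisited:      # grow the cluster breadth-first
                    unvisited.remove(q)
                    cluster.append(q)
                    queue.append(q)
        if len(cluster) > len(best):
            best = cluster
    return points[best]

# A dense "pit" blob and a small far-away blob of clutter.
pit = np.random.default_rng(1).random((200, 3))
clutter = np.random.default_rng(2).random((10, 3)) + 10.0
cloud = np.vstack([pit, clutter])
main = largest_euclidean_cluster(cloud, max_dist=0.5)
```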
6.1.3.2 Cleaning
As a next step the point cloud is analyzed by generating a vertical histogram of the points
contained in each layer of the point cloud. The thickness of the layers has only a small
effect on the results; it should be sufficiently large to have points in every layer, yet suf-
ficiently small to retain the desired histogram information. Taking between 10 and 50 slices
over the full vertical range of the point cloud, depending on the depth of the excavation
pit, has proven to be a good measure. Figure 6.9 shows the histogram for the point cloud
in Figure 6.8. The plot shows that the majority of the points lie in the range between the
elevations 5 and 36. The large spike around an elevation of 30 indicates the bottom of the
excavation pit where most points are located. In case of a non-horizontal ground plane,
this spike would be less obvious. The other end of the plot, where the number of points
Figure 6.7: Point cloud generated from photographs illustrated in Figure 6.6.
Figure 6.8: Raw unprocessed point cloud. Structures outside the excavation area have
been included into the point cloud.
flattens out at an elevation of 5, marks the upper end of the excavation pit. This mark can be
used for the volume calculation in the following Section. The point cloud resulting from
this procedure is illustrated in Figure 6.10.
Figure 6.9: Vertical histogram of the point cloud. The graph shows the number of points
at the different elevations in the point cloud. The bottom layer within the point cloud is
clearly visible as the spike at around an elevation of 30. The surrounding ground layer
can be estimated to be at the left of the graph, where the histogram flattens out.
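The histogram analysis can be sketched as follows; the slice count, the flatten-out threshold, and the synthetic cloud are illustrative assumptions (in this sketch the pit bottom is at the lowest elevation, whereas the vertical axis in Figure 6.9 is inverted).

```python
import numpy as np

def pit_bottom_and_ground(points, n_slices=30, ground_frac=0.05):
    """Estimate the pit bottom as the fullest slice of a vertical histogram
    and the surrounding ground level as the elevation where the histogram
    flattens out (drops below a fraction of the peak count)."""
    counts, edges = np.histogram(points[:, 2], bins=n_slices)
    centers = (edges[:-1] + edges[1:]) / 2
    bottom = centers[np.argmax(counts)]          # dense bottom layer
    thin = np.where(counts < ground_frac * counts.max())[0]
    ground = centers[thin[-1]] if thin.size else centers[-1]
    return bottom, ground

# Synthetic cloud: a dense bottom layer at z = 0.2 and sparse walls up to z = 10.
pts = np.zeros((1100, 3))
pts[:1000, 2] = 0.2
pts[1000:, 2] = np.linspace(0.0, 10.0, 100)
bottom, ground = pit_bottom_and_ground(pts)
```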
In order to calculate the volume of the excavated soil, the correct size of the point cloud
has to be known. When the point cloud was generated by a laser scanner, the size is
known. If however the point cloud was generated by photogrammetry, it needs to be
scaled to the correct size. This can be done either manually, by taking a known measure
such as the length of the excavation pit or an included bored pile wall, or automatically, by
placing marker points at known locations. The markers can be located within the point
cloud and the scale of the cloud can then be determined by comparing their distance in
the real world with their distance in the point cloud.
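The scaling step amounts to a single ratio; the marker coordinates and the known distance in this sketch are made-up values.

```python
import numpy as np

def scale_factor(marker_a, marker_b, real_distance):
    """Scale of a photogrammetric point cloud: ratio of the markers'
    known real-world distance to their distance inside the cloud."""
    cloud_distance = np.linalg.norm(np.asarray(marker_a, float) -
                                    np.asarray(marker_b, float))
    return real_distance / cloud_distance

# Two markers 5 cloud units apart that are known to be 25 m apart;
# every coordinate of the cloud is then multiplied by s.
s = scale_factor([0, 0, 0], [3, 4, 0], real_distance=25.0)
```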
Once the scale of the point cloud is known, there are different ways to calculate the
volume of the excavation pit. The most intuitive way is to create the convex hull of the
point cloud. This can be done using the quickhull algorithm by Barber et al. (1996). In
most cases the convex hull is larger than the actual volume; especially with irregularly
shaped excavation pits, such as the U-shaped pit in Figure 6.11, the error is significant and
cannot be neglected. Figure 6.12 shows the convex hull of the point cloud in Figure 6.10.
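The overestimation can be demonstrated with SciPy's ConvexHull, which wraps the Qhull library, an implementation of the quickhull algorithm by Barber et al. (1996); for simplicity the sketch uses a 2D L-shaped cross-section, where ConvexHull.volume is the enclosed area.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Corner points of an L-shaped cross-section with an enclosed area of 3.
l_shape = np.array([[0, 0], [2, 0], [2, 1], [1, 1], [1, 2], [0, 2]], float)
hull = ConvexHull(l_shape)
# The convex hull bridges the concave corner, so its area (3.5) exceeds
# the true area (3) -- the same effect inflates the computed volume of
# u-shaped excavation pits.
```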
Figure 6.10: The point cloud from Figure 6.8 after cleaning. Points outside the excavation
area have been removed.
Figure 6.11: The convex hull of a U-shaped excavation pit would have a much higher
volume than the excavation pit itself. The white area is the actual excavation pit, the
orange area is the convex hull.
Figure 6.12: Convex hull of the point cloud in Figure 6.10. The hull was created using
the quickhull algorithm by Barber et al. (1996).
Figure 6.13: Meshed surface of the point cloud in Figure 6.10. The surface was generated
using the Poisson surface reconstruction algorithm by Kazhdan et al. (2006).
Figure 6.14: Meshed surface of the point cloud in Figure 6.10. The surface was generated using the ball-pivoting algorithm by Bernardini et al. (1999). The red circle shows a hole that was left open.
Based on the observation that the walls of an excavation pit hardly allow any overhang, because overhanging soil would cave in, a more specialized algorithm was developed7.
The elevation of the surrounding soil is known due to the histogram analysis described in
Section 6.1.3.2.
Under the assumption of walls without overhang the point cloud can be reduced to its
ground layer by removing all vertical duplicates of points, except the lowest ones. De-
pending on the density of the point cloud a tolerance can be defined in order to determine
which points cover each other. Once all vertical duplicates have been removed, the point
cloud is reduced to a ground layer as shown in Figure 6.15.
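This reduction can be sketched with a grid-based duplicate check (the tolerance value, names, and toy cloud are illustrative, not the thesis implementation):

```python
def reduce_to_ground_layer(points, tolerance=0.25):
    """Keep only the lowest point in each (x, y) grid cell of size
    `tolerance`, removing all vertical duplicates above it."""
    lowest = {}
    for x, y, z in points:
        cell = (round(x / tolerance), round(y / tolerance))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)
    return list(lowest.values())

# Two points cover each other vertically; only the lower one survives.
cloud = [(0.0, 0.0, 30.0), (0.01, 0.0, 35.0), (1.0, 1.0, 31.0)]
print(reduce_to_ground_layer(cloud))
# -> [(0.0, 0.0, 30.0), (1.0, 1.0, 31.0)]
```

The tolerance plays the role described above: it defines which points are considered to cover each other, depending on the density of the cloud.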
Figure 6.15: The point cloud from Figure 6.10 after all vertical duplicate points have been removed.
Since the ground layer could be irregular or have gaps, it can be homogenized and thinned out. First, all gaps are linearly interpolated, creating a refined ground layer as shown in Figure 6.16. Then the cloud can be thinned out to a lower density with points at regular distances, as shown in Figure 6.17. The ground layer can then be triangulated into a mesh, forming a set of triangles. Each of those triangles is then extruded towards the elevation of the surrounding soil, which was determined by the histogram analysis. Each triangle creates a prism of which the volume can be calculated. The sum of all prism volumes adds up to the volume of the excavation pit.
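The prism summation can be sketched as follows (a toy example with two triangles tiling a flat pit bottom; names and numbers are illustrative):

```python
def prism_volume(triangle, ground_elevation):
    """Volume of the prism between a ground-layer triangle and the
    surrounding ground elevation from the histogram analysis."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = triangle
    # Area of the triangle projected onto the xy-plane.
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    mean_depth = ground_elevation - (z1 + z2 + z3) / 3.0
    return area * mean_depth

# Two triangles tiling a 1 m x 1 m pit bottom at z = 28, ground at z = 30.
tris = [((0, 0, 28), (1, 0, 28), (1, 1, 28)),
        ((0, 0, 28), (1, 1, 28), (0, 1, 28))]
print(sum(prism_volume(t, 30.0) for t in tris))  # -> 2.0
```

Using the mean of the three corner elevations makes each prism a first-order approximation of the soil column under its triangle, which converges to the exact volume as the mesh is refined.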
The described method was successfully tested on multiple construction sites to deter-
mine the excavated volume at multiple time steps. Detailed results are described in the
context of multiple case studies in Section 8. Situations where the surrounding ground
is very uneven or at a steep slope might cause the results to decrease in quality. Such
situations need further field studies in order to be evaluated.
7 The idea for this algorithm was developed in close cooperation with Benjamin Steer in the context of his interdisciplinary project "Processing of Point Clouds of Excavation Sites to obtain Progress Information".
Figure 6.16: The point cloud from Figure 6.15 after refinement to fill holes.
Figure 6.17: The point cloud from Figure 6.16 after resampling to reduce the number of
points to be evenly distributed.
During a comparison study of laser scanning and photogrammetry, several key differences
were determined. The laser scanner needs fewer accessible points on the site, and modern
scanners can capture the point clouds in less than a second (Kusnoto & Evans, 2002). The
laser scanner however is an expensive piece of equipment and requires trained personnel.
For instance, a Leica ScanStation P40 costs more than 100,000 EUR. Furthermore it can
be sensitive to weather conditions, such as rain. As a single laser scan can rarely capture
the entire excavation pit, the resulting point clouds need to be stitched. This requires
manual work or the placement of marker points on the site. Additionally laser scanning
is, due to the fewer perspectives that are captured, more sensitive to occlusions as shown
in Figure 6.2. Photogrammetry on the other hand only requires a standard digital camera and a non-professional operator. In the performed field studies, a basic explanation of the scheme described in Section 6.1.2 sufficed for the personnel to take the right photos.
The approach however requires the area around the excavation pit to be accessible. Large spans around the pit that are inaccessible can cause the structure-from-motion approach to fail. Furthermore, puddles of water in the excavation pit create errors; if they are small, they can be filled in using the approach described in Section 6.1.3.3. Another problem that may occur is snow: if the pit is covered in snow, the feature extraction of the structure-from-motion algorithm will not be able to gather sufficient data to triangulate a dense point cloud. Overall, these limitations did not create major problems during the case studies.
Figure 6.18: Overview of the system combining the presented photogrammetry approach
with the video analysis system by Ogunmakin et al. (2013).
8 This synergetic approach was developed in close cooperation with Gbolabo Ogunmakin, Patricio A. Vela from the Georgia Institute of Technology, André Borrmann, and Jochen Teizer.
Figure 6.19: Dump truck state estimates for a video segment of 6 minutes duration and
the activity states in a pie chart (Bügler et al., 2016).
The synergetic approach was successfully tested on two large excavation sites, as described in detail in Section 8. The main problem of the photogrammetry approach was missing measurements due to inclement weather. However, due to the absolute measurements obtained by the volume calculation, the missing data points could be interpolated.
Another problem is that the processing of the image data to generate the point clouds and calculate the excavated volume caused a delay of about half a day. The same problem occurred with the video analysis, where neither real-time transfer nor processing of the video data was possible. On the upside, obtaining measurements of the previous day in the morning of the next day was feasible and sufficient for decision makers to react
to observed problems. Errors in the video processing were mainly caused by temporary
occlusions such as machines moving in front of each other. This can be addressed by plac-
ing multiple cameras at different locations. Generally, even when occlusions caused high
error rates in the video analysis, the acquired measurements still correlated with the ac-
tual activity and provided accurate estimates of the performance. Future research should
address the use of multiple cameras in conjunction and speeding up video processing to
near real-time.
In summary, the video analysis approach adds both temporal information as well as
resource attribution to the data acquisition approach. In a reactive scheduling system
where delays need to be attributed to processes or involved resources and the respective
performance factors it is vital to obtain accurate metrics on the progress of the involved
processes as well as to determine reasons for observed delays. The photogrammetry ap-
proach in combination with the video analysis approach proved effective for this purpose.
6.3 Summary
This Chapter introduced two novel methods for the monitoring of excavation processes.
Photogrammetry is the creation of point clouds based on photographs taken of an object.
This approach has been used to generate point clouds of excavation pits, of which the volume could be calculated afterwards. While this process proved to be sensitive to reflective surfaces and snow, field studies showed that it is a viable approach to monitor the progress of excavation processes.
However, it is not possible to detect underlying reasons for observed delays, as the capturing of such point clouds is only feasible with low temporal resolution. In order to address this issue a second visual approach, video analysis, was introduced. This approach makes it possible to quantify the interactions between excavators and dump trucks, measure idle times, and count the number of shovel loads and dump trucks entering and leaving the site. Combining the two sources of information makes it possible to determine the absolute productivity of the excavation process as well as the reasons for periods of low productivity.
This process results in performance factors that can be updated at regular intervals. Additionally, the variations in productivity can be quantified in terms of uncertainty. Both
the performance factors and the probability distributions they are subject to can be used
to update a reactive simulation model and produce updated schedules taking the newly
arisen circumstances into account.
7 Data-driven Reactive Scheduling
"For every action, there is an equal and opposite reaction." - Newton et al.
(1850)
This thesis presents the concept of reactive scheduling in the domain of construction projects consisting of excavation and shoring operations. The presented methods in combination form a synergic approach that is the focus of this Chapter.
7.1 Concept
The overall concept of the data-driven reactive scheduling approach presented in this the-
sis is illustrated in Figure 7.1. First, a model of the construction project is created based
on the specific requirements of the project. This process is described in detail in Section
2.6. The modeling can be done on the basis of process models as discussed in Section
2.5. During this step, the right choice of granularity as described in Section 2.4 and the
inclusion of uncertainty as described in Section 2.2 have to be considered. Including un-
certainty into the model, given the required data is available, allows for more accurate
predictions and can minimize the amount of re-planning required. The model also needs
to include the required resources, precedence constraints and other required elements as
described in Section 2.6.2. For this process Building Information Models can be utilized
as described in Section 2.6.3 to extract elements that need to be built as well as precedence
relationships among them.
Once a model has been created it can be used to generate schedules taking all specified
circumstances into account. Since not every schedule will meet the requirements of the
planner, a metric on how well a schedule suits the purpose is required as described in
terms of objective functions in Section 3.2. The most commonly used metric in research is the makespan of the entire project; in real projects, however, costs, resource utilization, and robustness (Section 3.2.1) commonly form a multi-objective optimization problem, as is the subject of Section 3.2.2.
After an objective has been defined, schedules can be generated as described in Sec-
tion 3.4. As the underlying Resource-Constrained Project Scheduling Problem (RCPSP)
is considered NP-hard, heuristic algorithms are of high interest. This thesis introduced two new heuristic methods for optimizing schedules: the iterative limited-depth local
Figure 7.1: Flowchart of the overall concept of the reactive scheduling approach. An
initial schedule is built on the basis of a model of the construction project. Once construc-
tion has started, real-time data is acquired. If construction is not yet finished, the model
is updated and it is verified whether the float time of an activity was exceeded. If that is
the case an updated schedule is generated. This process is repeated until construction has
finished.
search algorithm in Section 4.3 and the adaptation of the water wave optimization metaheuristic by Zheng (2015), presented in Section 4.2. Both algorithms can utilize the novel pseudo-serial schedule generation scheme, which in turn is
described in Section 4.1. This scheme combines the advantages of both common parallel
and serial schedule generation schemes. Both presented novel scheduling algorithms can
be used to generate an initial schedule based on the developed model as well as update
the schedule once new information has been acquired. Updating a schedule is in essence
the same process as generating a new schedule, except that all activities that have already
been started in the past are locked into place during a schedule update. Furthermore no
new activities can be scheduled into the past.
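These two rules of a schedule update can be sketched as follows (the data structure, a map from activity to planned start time, is a hypothetical simplification, not the thesis implementation):

```python
def prepare_update(planned_starts, now):
    """Split activities for re-planning: activities that have already
    started stay locked in place; every remaining activity receives an
    earliest-start bound of `now`, so nothing is scheduled into the past."""
    locked = {a: s for a, s in planned_starts.items() if s <= now}
    earliest_start = {a: now for a, s in planned_starts.items() if s > now}
    return locked, earliest_start

planned = {"A": 0, "B": 3, "C": 8}    # activity -> planned start time
locked, bounds = prepare_update(planned, now=5)
print(locked)   # -> {'A': 0, 'B': 3}
print(bounds)   # -> {'C': 5}
```

The scheduling algorithm then only optimizes the activities in the second set, subject to the earliest-start bounds.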
On the basis of the initial schedule the construction project is started. During the course of construction, data is acquired for the individual activities, updating the model reactively. Excavation activities can be monitored using the photogrammetry and video-analysis approaches presented in Sections 6.1.2 and 6.2. As a result, delays of activities are observed, and the underlying performance factors of the involved machines, as presented in Section 2.1, can be estimated from them. Performance factors are generally defined as the amount of work performed over time. The performance factors as well as the activity durations are commonly subject to uncertainty, which can be estimated based on the
collected data as described in Section 2.2. If, for instance, an excavator does not perform as expected because the soil conditions are different than initially measured, the performance factor of the machine is decreased. Activities involving the excavator will then be updated by extending their duration accordingly.
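As a minimal illustration of such an update (the quantities and performance values are invented):

```python
def updated_duration(remaining_quantity, performance_factor):
    """Remaining activity duration after the performance factor (work
    performed per unit of time) has been re-estimated from site data."""
    return remaining_quantity / performance_factor

remaining_soil = 2000.0   # m3 still to be excavated
print(updated_duration(remaining_soil, 400.0))  # planned rate  -> 5.0 days
print(updated_duration(remaining_soil, 250.0))  # measured rate -> 8.0 days
```

The drop from 400 to 250 m3/day extends the remaining duration from five to eight days, which is the value fed back into the schedule.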
Once real-time data has been acquired, the updated performance factors and accordingly estimated activity durations are checked for exceedance of the float times, which can be calculated using the Dori Algorithm (DA) introduced in Section 3.4.6. If the float time of an activity is exceeded, a re-planning process is initiated, generating a new schedule based on the newly arisen circumstances. If the uncertainty in the activity durations has been estimated in terms of probability distributions, as discussed in Section 2.2, Monte-Carlo simulation as described in Section 2.2.6 can be used to obtain metrics on the effects of the uncertainty on the resulting schedules. Furthermore, those metrics can be used to generate particularly robust schedules, minimizing the amount of re-planning required in the future.
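The sampling idea can be illustrated with a minimal Monte-Carlo sketch (a purely serial activity chain with triangular duration distributions; the numbers are illustrative, not the thesis implementation):

```python
import random

def simulate_makespan(durations, runs=10_000, seed=42):
    """Monte-Carlo estimate of the makespan distribution of a serial chain
    of activities with triangularly distributed durations (low, mode, high)."""
    rng = random.Random(seed)
    samples = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in durations)
        for _ in range(runs)
    )
    return samples[len(samples) // 2], samples[int(0.95 * len(samples))]

# Three serial activities with uncertain durations in days.
activities = [(4, 5, 8), (2, 3, 6), (9, 10, 14)]
median, p95 = simulate_makespan(activities)
print(f"median makespan {median:.1f} days, 95th percentile {p95:.1f} days")
```

The spread between the median and the 95th percentile is exactly the kind of robustness metric that can enter the objective function of the schedule optimization.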
7.2 Illustrative Example

Figure 7.2: Example of a simple construction schedule. The excavation activity A requires
no shared resources, while the shoring activities B through E share the same resource, of
which only one instance is available. Parallel execution of any of the activities B, C, D,
and E is hence prohibited.
Applying the Dori Algorithm (DA) described in Section 3.4.6 provides the informa-
tion that all activities in the schedule have a float time of zero and hence are critical.
Figure 7.3: The schedule from Figure 7.2 after a delay of activity A has been observed.
The delay propagated through the remaining schedule because the float time of activity A
has been exceeded. Activities D and E are now scheduled to be executed later.
Figure 7.4: The schedule from Figure 7.2 after a delay of activity A has been observed,
and since the float time was exceeded, a new schedule was generated. Now activity E is
executed during the delay of activity A.
Therefore every delay in the schedule will propagate through the entire remaining schedule and increase the overall makespan by the amount of the delay. It is assumed that activity A is monitored and that, prior to its finish, it is expected to take longer than anticipated.
This expected delay can be determined on the basis of the photogrammetry-based
monitoring approach presented in Section 6.1. The duration of such an excavation activ-
ity can be a product of the performance factors of the involved excavators and dumpers.
When the measured progress falls behind the expectations, the video-based monitoring
approach described in Section 6.2 helps to attribute the delay to either the involved exca-
vators or dumpers. If the decision makers react by deploying additional resources, those can be accounted for in terms of updated performance factors when calculating the new duration of the activity. When no or insufficient countermeasures are taken, the result is a delayed schedule such as the one shown in Figure 7.3.
Figure 7.5: The resource profile for the green resource of the schedule in Figure 7.2.
Figure 7.6: The resource profile for the green resource of the schedule in Figure 7.3.
Figure 7.7: The resource profile for the green resource of the schedule in Figure 7.4.
Figure 7.6 shows the same resource profile for the delayed schedule from Figure 7.3.
The delay postpones the execution of activity D and therefore creates a gap in the resource profile.
The resource profile in Figure 7.7 corresponds to the reactive schedule from Figure 7.4. It is visible that the resource gap has been closed. In many cases optimizing a schedule with respect to a short makespan will also yield a lean resource profile. Additionally, the costs for hiring the resource could be included as an optimization objective, creating higher costs for the schedule containing the gap. However, at a certain duration of the gap it may become worthwhile to make the resource available to other construction sites in the meantime.
When taking robustness into account, the schedule shown in Figure 7.4 would have been a better initial schedule, as activity A has a float time equal to the duration of activity E and the observed delay would not have had any effect on the schedule. This illustrates that treating scheduling as a multi-objective optimization problem is more practical. Taking robustness in terms of maximizing float times into account, for instance by using the sum of all float times as a second objective, would have selected the final schedule as the initial schedule, as both have equal makespans.
Another possible approach would be to take uncertainties in the activity durations or the underlying performance factors into account. Had it been known that activity A is likely to be delayed, an approach such as Monte-Carlo simulation, as described in Section 2.2.6, would also have shown the schedule in Figure 7.4 as the better initial option. However, if it is more likely that the sum of the delays of activities B and C is larger than the delay of A, the schedule in Figure 7.2 would have been an equally good choice.
Since it is very hard to manually make such considerations in the case of more complex schedules, it is advantageous to include uncertainty in the project model beforehand, given the required data is available. When only coarse data is available, methods like VIBES (AbouRizk et al., 1991), as presented in Section 2.2.1, have been shown to be effective in estimating the required probability distributions. If detailed sample data is available from previous projects or other activities, goodness-of-fit tests, as described in Section 2.2.5, are useful for finding the most suitable probability distribution. Once a
distribution is selected, its parameters can be updated based on newly acquired data in
order to aid the reactive scheduling approach.
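One simple way to estimate the parameters of such a distribution, for instance a beta distribution over performance factors, is the method of moments; the sketch below assumes the samples are scaled to [0, 1] by a chosen upper bound (all names and numbers are illustrative):

```python
def fit_beta_moments(samples, upper):
    """Method-of-moments estimate of beta parameters (alpha, beta) for
    performance-factor samples scaled to [0, 1] by `upper`."""
    xs = [s / upper for s in samples]
    m = sum(xs) / len(xs)                          # sample mean
    v = sum((x - m) ** 2 for x in xs) / len(xs)    # sample variance
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Daily excavation performance in m3/day; 4000 m3/day is an assumed cap.
daily = [800, 1600, 1600, 2400]
alpha, beta = fit_beta_moments(daily, upper=4000)
print(round(alpha, 2), round(beta, 2))  # -> 4.4 6.6
```

A goodness-of-fit test as in Section 2.2.5 would then confirm or reject the fitted distribution, and the parameters can be re-estimated as new daily measurements arrive.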
7.3 Summary
This Chapter showed how the methods presented throughout this thesis interlock into a synergic approach. It has been described how the overall process of creating a data-driven reactive scheduling model for construction projects works, and an illustrative example has been presented. The example illustrated how a delay can cause an optimal schedule to become suboptimal.
Furthermore, it showed how anticipation of delays can prevent the necessity of re-planning. This can be realized by taking robustness into account in terms of a multi-objective optimization approach during the generation of the initial schedule. Furthermore, if the uncertainty of activity durations or performance factors is known, performing Monte-Carlo simulation in order to evaluate the schedule during generation is a viable option.
8 Case Studies
This Chapter presents experimental results obtained through case studies that have
been performed on real construction sites. In order to illustrate the feasibility of the
approaches presented in Sections 6.1.2 and 6.2, three case studies have been performed
on different large real-life construction sites. This Chapter describes each of the cases and
discusses the data that was collected and evaluated.
Figure 8.4 shows performance factors that have been calculated based on the volume measurements. The blue bars attribute the measured change to the day of measurement, while the red bars are interpolated, assuming constant change since the last measurement. Figure 8.5 shows the probability density function of a beta
Figure 8.2: Time series of point clouds of the office building excavation site.
Figure 8.3: Volume plot of the office building excavation site (excavated volume in m³ over time in days).
distribution that was created based on the sample data. This distribution could be used to make forecasts on the future performance of the excavation process. Both goodness-of-fit tests presented in Section 2.2.5 confirm that this probability distribution fits the data. The mean excavation performance was 1227.5 m³/day. Moving the excavated soil out of
the pit was particularly easy due to the ramp that is visible at the bottom of Figure 8.1, leaving the pit towards the right.
Figure 8.4: Performance factors calculated based on the volume calculations in Figure
8.3.
This case study illustrated the capabilities of the photogrammetry-based progress monitoring on a rectangular construction site with a large surface area. Furthermore, the calculation of performance factors was demonstrated, and the uncertainty could successfully be described using a beta distribution. The resulting data can be used to maintain synchronization of a reactive simulation model that updates construction schedules based on the newly obtained information.
Figure 8.5: Beta distribution estimated based on the performance factors in Figure 8.4 with α = 1.4427 and β = 2.7839. The mean performance was 1227.5 m³/day.
8.2 Underground Parking Garage
Figure 8.6: Point cloud of the underground parking garage excavation site. Lifting soil
out of the pit slowed down the excavation progress.
Additionally, a video camera was placed on day 26 of the observation period and 4 hours of video were recorded. A point cloud was created prior to the recording and a second one after the recording. The event processor, presented in Ogunmakin et al.
Figure 8.7: Volume plot of the underground parking garage excavation site.
Figure 8.8: Performance factors calculated based on the volume calculations in Figure
8.7.
(2013), combined the tracking and activity estimation results to generate the statistics
illustrated in Figure 8.10. The statistics were generated by counting excavator bucket
loads and dump trucks entering and leaving the site. In order to measure the performance
Figure 8.9: Beta distribution estimated based on the performance factors in Figure 8.8
with α = 2.0244 and β = 4.2109. The mean performance was 429.21 m³/day.
of the algorithm, the ground truth was manually counted in the video. A total of n_trucks = 22 trucks were detected in the processed video, which exactly matched the ground truth. A total of n_buckets = 171 bucket loads were detected, compared to the ground truth of n_buckets,gt = 177 bucket loads, implying an error of 3.4%. The total volume of the excavator's bucket was 2.5 m³. Using the ground truth n_buckets,gt = 177 results in a performance factor of τ_v,gt = (2.5 m³ · 177) / 4 h ≈ 110 m³/h. Using the estimated n_buckets = 171 results in a performance factor of τ_v = (2.5 m³ · 171) / 4 h ≈ 107 m³/h. Through the total excavated volume, calculated on the basis of the point clouds, the corrected volume per bucket can be calculated. Using the ground truth bucket count, this results in v_bucket,cor,gt = 418 m³ / 177 = 2.36 m³, while using the measured data it results in v_bucket,cor = 418 m³ / 171 = 2.44 m³, also yielding an error of 3.4%. The corrected bucket volume accounts for the swell factor of the soil as well as for the fact that the buckets are not always completely filled. From the pie chart in Figure 8.10a it is visible that, for the majority of the time, the excavators were waiting for the dumpers9.
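The arithmetic above can be reproduced in a few lines (the numbers are taken from this case study; the variable names are illustrative):

```python
# Performance factors from the bucket counts of this case study.
bucket_volume = 2.5                  # m3, total volume of the excavator's bucket
hours = 4.0                          # duration of the recorded video in hours
n_buckets_gt, n_buckets = 177, 171   # ground-truth vs. detected bucket loads
pit_volume = 418.0                   # m3, excavated volume from the point clouds

tau_gt = bucket_volume * n_buckets_gt / hours   # performance, ground truth
tau = bucket_volume * n_buckets / hours         # performance, estimated
v_cor_gt = pit_volume / n_buckets_gt            # corrected bucket volume (gt)
v_cor = pit_volume / n_buckets                  # corrected bucket volume (est.)

print(tau_gt, tau)                          # -> 110.625 106.875
print(round(v_cor_gt, 2), round(v_cor, 2))  # -> 2.36 2.44
print(round((n_buckets_gt - n_buckets) / n_buckets_gt * 100, 1))  # -> 3.4
```

The corrected bucket volume below the nominal 2.5 m³ reflects swell and partially filled buckets, as described above.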
The presented results show that the photogrammetry approach can be used to monitor the excavation process over long periods of time. Furthermore, they show that the additional information obtained through the video analysis approach presented by Ogunmakin et al. (2013) can be used to obtain deeper insight into the progress and to detect underlying
problems causing delays. The precise metrics obtained through this approach can be
used as feedback for a reactive scheduling model either updating performance factors or
estimating updated activity durations and their respective probability distributions.
9 This synergetic approach was developed in close cooperation with Gbolabo Ogunmakin, Patricio A. Vela, André Borrmann, and Jochen Teizer.
(a) Pie chart with aggregate dump truck states (b) Loading time per truck
Figure 8.10: Aggregate dump truck statistics for underground parking garage. The left
part shows a pie chart with the ratios of time spent in the different states. The right part
shows the loading time for the individual trucks (Bügler et al., 2016).
8.3 Hospital Building
Performance factors for the entire period are visualized in Figure 8.13, and the resulting probability distribution is plotted in Figure 8.14, which was confirmed to fit the data using the goodness-of-fit test presented in Section 2.2.5. Due to the long recording period in this case study, the data contains many days on which no excavation processes were active. Therefore the calculated performance factors do not reflect the actual performance of the excavation processes in this case. Detailed data on the times when excavation was idle was not recorded, and therefore exact performance factors could not be calculated a posteriori. Based on the areas with high activity it can be estimated that the first excavation period between days 4 and 8 had a performance of 1498 m³/d, while the second excavation period between days 35 and 40 had a performance of 808 m³/d. For better calculation of the
performance factors, machine data of the excavators could be used in order to measure
the amount of time they were running. Alternatively a video monitoring approach such
as the one presented in Section 6.2 can be used. Nevertheless this case study showed
that photogrammetry can be used to monitor the progress of large excavation projects
over long periods of time. Given additional machine data or video recordings, precise
calculation of performance factors would be possible.
Figure 8.12: Volume plot of the hospital building excavation site (excavated volume in m³ over time in days).
Figure 8.13: Performance factors calculated based on the volume calculations in Figure
8.12.
Figure 8.14: Beta distribution estimated based on the performance factors in Figure 8.13 with α = 2.4661 and β = 7.6387. The mean performance was 488.10 m³/day.
8.4 Summary
The results illustrate that the photogrammetry approach to measuring the excavated volume of an excavation pit can be used to monitor an excavation site over long periods of time. Furthermore, performance measures can be calculated and used to make predictions on the future progress.
The video monitoring approach presented in Section 6.2 was successfully tested and
allowed the exact attribution of delays to underlying circumstances. This information
allows the decision makers to react appropriately to unexpected events. In combination
with the calculated performance factors and their respective probability distributions it is
possible to make accurate predictions for the future. Additionally, the data can be used to
estimate the performance of similar processes on other construction sites.
While the case studies were mostly performed without major problems, some issues arose during their execution. Bad weather conditions, such as heavy rain causing large pools of water that form large reflective surfaces, created problems when capturing point clouds using photogrammetry. Since this did not occur over long periods of time and only single sessions did not return a usable point cloud, these problems were practically negligible.
In the case of the hospital construction site there were long periods of time during which no excavation took place; commonly this happens during the anchoring of bored pile and sheet pile walls. Those periods need to be marked manually so that they are not taken into account when performance factors are updated.
9 Summary and Outlook
While unexpected delays in large construction projects commonly render plans invalid and require re-planning that takes new circumstances into account, scheduling in the construction industry is still performed in a manual and error-prone way. According to Koehn et al. (1978), unexpected delays comprise up to 30 percent of the overall project costs.
This thesis presented a concept for the reactive scheduling of large shoring and excavation projects, including new scheduling methods and on-site data acquisition. Additionally, methods for the modeling, simulation, and optimization of construction projects were introduced. Furthermore, a representation of uncertainty is included, and methods for integrating this uncertainty into the concept are presented.
The main contributions of this thesis are the new pseudo-serial schedule generation
scheme (Section 4.1), the iterative limited-depth tree search algorithm (Section 4.3) for
resource constrained project scheduling problems (RCPSPs), the application of the water-
wave optimization algorithm by Zheng (2015) to RCPSPs (Section 4.2), the uncertainty
extension of the project scheduling library PSPLIB (Section 4.5), as well as the automatic progress tracking of excavation processes using point clouds (Section 6.1) and its combination with video-based monitoring (Section 6.2). A number of theoretical and practical case studies have been performed using the project scheduling library PSPLIB (Section 4.4) and its extension, the UPSPLIB (Section 4.5), as well as three different real-life construction projects which have been accompanied in the course of this thesis (Chapter 8).
Chapter 2 introduced the state-of-the-art in modeling and simulation of construction
operations. A main measure to estimate how long a machine or worker takes to execute a certain activity are performance factors, quantifying the amount of work done in a certain time (Section 2.1). Those performance factors, and therefore the durations of activities, are subject to uncertainties which can be modeled using different probability distributions (Section 2.2). A measure of how well a distribution fits a given set of data can be acquired using goodness-of-fit tests (Section 2.2.5). Furthermore, Monte-Carlo simulation
can be used to estimate the consequences of uncertainties in a larger context, such as a
whole construction project (Section 2.2.6). Another, if not the most important aspect of
a construction project is the cost factor (Section 2.3). Costs can be generated in numer-
ous ways and can be static or dynamic. They are created through the use of materials,
resources, workers, or even fines that have to be paid due to missed milestones. A very
important decision to be made when modeling construction operations is the granularity
of the model (Section 2.4).
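To make the interplay of performance factors (Section 2.1) and Monte Carlo simulation (Section 2.2.6) concrete, the following minimal sketch estimates the duration of a single activity whose performance factor is beta-distributed; all quantities, the performance range, and the beta parameters are invented for illustration and are not taken from the thesis.

```python
import random
import statistics

def simulate_duration(quantity, perf_low, perf_high, alpha, beta_, n=10_000, seed=42):
    """Monte Carlo estimate of an activity's duration when the performance
    factor (units of work per hour) is beta-distributed on [perf_low, perf_high]."""
    rng = random.Random(seed)
    durations = []
    for _ in range(n):
        # rescale the beta sample from [0, 1] to the performance range
        factor = perf_low + (perf_high - perf_low) * rng.betavariate(alpha, beta_)
        durations.append(quantity / factor)
    # mean duration and 95th percentile as a pessimistic planning value
    return statistics.mean(durations), statistics.quantiles(durations, n=100)[94]

# 120 units of work, performance between 20 and 40 units/hour
mean_d, p95_d = simulate_duration(quantity=120.0, perf_low=20.0, perf_high=40.0,
                                  alpha=2.0, beta_=2.0)
```

Because the duration is a nonlinear function of the performance factor, the mean simulated duration exceeds the duration computed from the mean performance factor; making such effects visible is precisely what Monte Carlo simulation contributes here.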
While it is possible to model the precise movement of each machine, such a model
is hard to evaluate and even harder to maintain when changes occur. It is therefore
wise to make abstractions where they do not have a drastic effect on the outcome. For
instance, a two-step method can be used: first, construction processes, such
as the production of a bored pile wall, are modeled as process models based on the performance
factors of the involved machines (Section 2.5). The resulting process model can then be
used to obtain an expected duration for the production of that bored pile wall, which
can in turn be included as a single activity in a model of the entire project. For the purpose of
modeling such processes, different formalisms can be found in the literature, such as CYCLONE
(Section 2.5.4) or STROBOSCOPE (Section 2.5.5). The resulting construction project
model needs to contain specific information in order to adequately represent the system
to be studied (Section 2.6.2). This information can in part be obtained automatically from
digital representations of the building to be constructed (Section 2.6.3). Once a model has
been successfully created, it can be used to perform extensive simulation studies (Section
2.6.4) as well as to generate and optimize construction schedules.
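As a minimal illustration of this two-step idea, a fine-grained process model of a bored pile wall can be collapsed into a single activity duration for the project model. The figures below (drill rate, concreting time, number of rigs) are hypothetical and chosen only for the example.

```python
def bored_pile_wall_duration(n_piles, pile_depth_m, drill_rate_m_per_h,
                             pour_h_per_pile, n_rigs=1):
    """Roll a fine-grained process model up into one activity duration:
    drilling plus concreting time per pile, with the total work shared
    among the available drilling rigs."""
    per_pile_h = pile_depth_m / drill_rate_m_per_h + pour_h_per_pile
    return n_piles * per_pile_h / n_rigs

# 10 piles, 12 m deep, 4 m/h drill rate, 1 h of concreting each, 2 rigs
duration_h = bored_pile_wall_duration(10, 12.0, 4.0, 1.0, n_rigs=2)  # 20.0 hours
```

The returned value is exactly the kind of expected duration that can then enter the project-level schedule as one activity.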
Chapter 3 presents a detailed survey of the state of the art in construction schedule
optimization. In any optimization endeavor, a well-defined objective is of utmost importance (Section
3.2). Possible objectives include costs, makespan, as well as other schedule metrics.
As previously stated, uncertainty is an important aspect of any construction schedule;
the robustness of a schedule is therefore an important schedule metric (Section 3.2.1).
As decision makers often pursue multiple objectives at once, multi-criteria op-
timization methods are of high interest (Section 3.2.2). The project scheduling problem
(PSP) is the problem of scheduling a given set of activities under precedence constraints
(Section 3.3). It can be solved using the critical path method (Section 3.3.3) or the pro-
gram evaluation and review technique (Section 3.3.4). This, however, neglects the lim-
ited availability of resources, which adds considerably more complexity to the problem. The
resulting problem is called the resource constrained project scheduling problem (Section
3.4). In order to generate feasible schedules for a given RCPSP, two elementary schedule
generation schemes, the serial and the parallel scheme, can be found in the literature (Section
3.4.4). Problems can be evaluated prior to scheduling using different problem character-
istics (Section 3.4.2).
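The critical path method mentioned above reduces to a forward and a backward pass over the precedence network. The following sketch (a deliberately simple illustration, not an implementation from the thesis; the repeated-sweep topological ordering is chosen for brevity over efficiency) computes the makespan and the set of zero-float activities.

```python
def critical_path(durations, preds):
    """Forward/backward pass of the critical path method.
    durations: {activity: duration}; preds: {activity: set of predecessors}.
    Returns (makespan, set of critical activities)."""
    # forward pass: earliest start = max earliest finish of all predecessors
    es, ef = {}, {}
    remaining = set(durations)
    while remaining:
        for a in sorted(remaining):
            if preds[a] <= ef.keys():          # all predecessors scheduled
                es[a] = max((ef[p] for p in preds[a]), default=0)
                ef[a] = es[a] + durations[a]
                remaining.discard(a)
                break
    makespan = max(ef.values())
    # backward pass: latest finish = min latest start of all successors
    succs = {a: {b for b in durations if a in preds[b]} for a in durations}
    ls, lf = {}, {}
    remaining = set(durations)
    while remaining:
        for a in sorted(remaining):
            if succs[a] <= ls.keys():          # all successors scheduled
                lf[a] = min((ls[s] for s in succs[a]), default=makespan)
                ls[a] = lf[a] - durations[a]
                remaining.discard(a)
                break
    critical = {a for a in durations if ls[a] == es[a]}  # zero total float
    return makespan, critical

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
makespan, critical = critical_path(durations, preds)
# makespan == 8, critical == {"A", "C", "D"}; B has a float of 2
```

The difference between latest and earliest start is exactly the total float discussed in Section 3.4.6.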
Furthermore, schedules can be generated by simulation (Section 3.4.5), such as
constraint-based discrete-event simulation. Simulation-based methods allow additional
measures of the schedules to be obtained, such as the critical path and float times (Sec-
tion 3.4.6). In principle, RCPSPs can be solved exactly using deterministic methods (Section 3.4.7), but
these are hardly feasible at large scale. Heuristic approaches are therefore the
subject of extensive research (Section 3.4.8), and numerous methods have been published
in the last decades (Section 3.4.8.9). Additionally, Kolisch & Sprecher (1996) published a
project scheduling problem library (PSPLIB), which contains more than 2000 benchmark
problems that are widely used in the literature (Section 3.4.9).
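The serial schedule generation scheme referenced in Section 3.4.4 can be sketched compactly for a single renewable resource: activities are taken in a given precedence-feasible priority order and started at the earliest time that is both precedence- and resource-feasible. The integer time grid below is a simplification for illustration.

```python
def serial_sgs(durations, preds, demand, capacity, priority_list):
    """Serial schedule generation scheme for one renewable resource.
    demand: {activity: resource units}; priority_list must be
    precedence-feasible. Returns {activity: finish time}."""
    finish = {}
    usage = {}  # resource usage per integer time unit
    for a in priority_list:
        # earliest precedence-feasible start
        t = max((finish[p] for p in preds[a]), default=0)
        # shift the start until the resource profile admits the activity
        while any(usage.get(u, 0) + demand[a] > capacity
                  for u in range(t, t + durations[a])):
            t += 1
        for u in range(t, t + durations[a]):
            usage[u] = usage.get(u, 0) + demand[a]
        finish[a] = t + durations[a]
    return finish

durations = {"A": 2, "B": 2, "C": 2}
preds = {"A": set(), "B": set(), "C": {"A"}}
demand = {"A": 1, "B": 1, "C": 1}
finish = serial_sgs(durations, preds, demand, capacity=1,
                    priority_list=["A", "B", "C"])
# with capacity 1 the activities are forced into sequence: finishes 2, 4, 6
```

Every schedule produced this way is active; the parallel scheme, by contrast, steps through decision points in time, which is the trade-off the pseudo-serial scheme of Chapter 4 mediates.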
Chapter 4 presents the main contributions of this thesis in the field of construction
scheduling. The pseudo-serial schedule generation scheme, a compromise between the serial and
parallel schedule generation schemes, combines the advantages of both. It
can generate a superset of the set of active schedules, the set of pseudo-active schedules,
while at the same time being computationally more efficient and having a more continuous
parameter space, with an increased likelihood of finding good solutions in a confined
area (Section 4.1).
Built on the new scheduling scheme, the water wave optimization algorithm by
Zheng (2015) was applied to RCPSPs and shown to produce results outperform-
ing many other published heuristics (Section 4.2). The performance of the algorithm was
also compared with the results obtained using the traditional scheduling schemes. The advantages
of the pseudo-serial scheduling scheme proved to be the adjustability of the search
space and the faster convergence of the solution.
Additionally, another heuristic algorithm, the iterative swap-based limited-depth tree
search, was presented (Section 4.3). Its results were worse than those of the water wave
optimization algorithm, but since it starts from a given schedule and improves it
by applying swaps and delays, it can be particularly useful for updating available schedules
in the context of reactive re-planning.
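The elementary move behind such a swap-based search can be illustrated as follows. This is a minimal sketch of the kind of neighborhood such an algorithm explores, not the thesis implementation: it enumerates the adjacent swaps of an activity list that keep the list precedence-feasible.

```python
def feasible_swaps(activity_list, preds):
    """Enumerate adjacent swaps of an activity list that keep it
    precedence-feasible. preds: {activity: set of predecessors}.
    Returns a list of (i, i+1) index pairs that may be swapped."""
    swaps = []
    for i in range(len(activity_list) - 1):
        a, b = activity_list[i], activity_list[i + 1]
        # swapping would place b before a; forbidden if a precedes b
        if a not in preds[b]:
            swaps.append((i, i + 1))
    return swaps

preds = {"A": set(), "B": {"A"}, "C": set()}
moves = feasible_swaps(["A", "B", "C"], preds)
# only B and C may be swapped; A must stay before its successor B
```

Each feasible swap yields a neighboring schedule whose makespan can be evaluated, which is what makes the approach suited to incrementally repairing an existing schedule.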
The commonly used PSPLIB lacks problems containing uncertain activity durations.
It was therefore extended by adding beta-distributed delays to each activity of each
of the PSPLIB single-mode problems. Additionally, eight objective
functions were designed to allow optimization for different aspects of the uncer-
tainties using Monte Carlo simulation (Section 4.5). If newly acquired data changes the
circumstances, durations, performance factors, or probability distributions, an update of
the existing schedule may be required, forming the reactive component of the proposed
approach (Section 4.6).
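The effect of such beta-distributed delays on a whole network can be sketched with a small Monte Carlo forward pass. The network, base durations, and delay parameters below are invented for illustration; the UPSPLIB problems themselves are far larger.

```python
import random
import statistics

def sampled_makespans(base, delay_params, order, preds, n=5_000, seed=1):
    """Monte Carlo makespan distribution when each activity receives a
    beta-distributed delay on top of its deterministic duration.
    delay_params: {activity: (alpha, beta, max_delay)};
    order must be a precedence-feasible activity sequence."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        ef = {}  # earliest finish per activity in this sample
        for a in order:
            alpha, beta_, max_delay = delay_params[a]
            d = base[a] + max_delay * rng.betavariate(alpha, beta_)
            ef[a] = max((ef[p] for p in preds[a]), default=0.0) + d
        results.append(max(ef.values()))
    return results

base = {"A": 2.0, "B": 3.0}
preds = {"A": set(), "B": {"A"}}
delay_params = {"A": (2.0, 2.0, 1.0), "B": (2.0, 2.0, 1.0)}
ms = sampled_makespans(base, delay_params, order=["A", "B"], preds=preds)
```

Objective functions of the kind designed in Section 4.5 are then simply statistics of this sample, e.g. its mean, variance, or a high quantile of the makespan.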
Experiments performed using the PSPLIB clearly showed that the presented algo-
rithms compare well to other heuristic algorithms in the field of RCPSPs and even out-
perform many of them (Section 4.4). Furthermore, it was shown that the adaptation
of the water wave optimization to RCPSPs using the pseudo-serial schedule generation
scheme is capable of creating good and robust solutions for the uncertainty extensions of
the PSPLIB (Section 4.5).
Chapter 5 provides an overview of the state of the art in data acquisition on construc-
tion sites. Most commonly, manual protocol-based tracking, such as noting milestones
in spreadsheets, is used for progress monitoring (Section 5.1). Additionally, some mod-
ern machines record data on the machine state during operation, which can be used for
progress monitoring (Section 5.2). Another viable data source is laser scanning (Section
5.3). However, due to its need for trained personnel and expensive equipment, modern
advances in the field of photogrammetry have gained increasing interest in the commu-
nity (Section 5.4). Additionally, radio-based tracking, including GPS as well as active
and passive RFID methods, is drawing attention due to the steadily decreasing costs of
the required hardware (Section 5.6). Only recently have methods of range imaging and video
analysis become computationally and technically feasible enough to enter
the realm of construction monitoring research (Section 5.7).
Chapter 6 describes advances in the progress monitoring of excavation processes that
were made in the course of this thesis and form additional contributions to the field of con-
struction progress monitoring. In order to track the total volume of excavated soil, a point-
cloud-based method is presented (Section 6.1). The point clouds are obtained through pho-
togrammetry and reduced to the area of interest by smart clustering and cleaning methods.
Finally, the volume of the resulting representation of an excavation pit is calculated and
can be used as a metric for the excavation progress. As this method only yields the abso-
lute excavated volume, it does not allow the reasons for observed delays to be tracked down.
For this reason, the method has been extended by a video-based tracking algorithm that
was developed in collaboration with Gbolabo Ogunmakin and Patricio A. Vela from Georgia Tech,
André Borrmann, and Jochen Teizer (Section 6.2). This approach allows counting the
number of trucks entering and leaving the site, as well as the number of buckets loaded
onto the trucks. Furthermore, the state of excavators and trucks can be determined. This
allows for a detailed analysis of the reasons for delays, such as a lack of trucks or excavators.
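The volume metric at the core of this chapter can be illustrated with a toy stand-in for the photogrammetric pipeline: instead of meshing a dense point cloud, the sketch below rasterizes pit-surface points onto a grid and sums the per-cell depth below a reference ground level. The cell size and reference level are parameters of the illustration, not of the method in Section 6.1.

```python
def excavated_volume(points, cell=1.0, ref_z=0.0):
    """Estimate excavated volume from a point cloud of the pit surface:
    rasterize the (x, y, z) points onto a square grid and sum the depth
    below the reference ground level ref_z over all occupied cells."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        # keep the deepest observed point per cell
        lowest[key] = min(z, lowest.get(key, ref_z))
    return sum((ref_z - z) * cell * cell for z in lowest.values())

# four cells of a pit, each 1 m x 1 m, excavated 3 m deep
pts = [(0.5, 0.5, -3.0), (1.5, 0.5, -3.0), (0.5, 1.5, -3.0), (1.5, 1.5, -3.0)]
volume = excavated_volume(pts)  # 12.0 cubic meters
```

Comparing such volumes across a time series of point clouds yields the progress metric; the truck and bucket counts from the video analysis then explain changes in its slope.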
Chapter 7 summarizes the overall concept presented in this thesis by illustrating how
the individual contributions interlock into a synergistic approach for data-driven reactive
scheduling. The concept is further illustrated by a worked example of a simple
schedule. The example shows how re-planning affects the performance of a construction
project and how the modeling of uncertainty and robustness can be utilized in a multi-
objective optimization approach in order to reduce the probability of requiring re-planning
during project execution.
Chapter 8 presents three case studies that have been performed on real construction
sites in order to evaluate the performance of the contributions of this thesis. The case stud-
ies showed that the photogrammetry-based progress monitoring of excavation sites pro-
vides a powerful tool for monitoring excavation processes under different circumstances.
Time series of point clouds were recorded, and the progress of excavation could be tracked
precisely. Additionally, it was shown that through the addition of the video-based ap-
proach by Bügler et al. (2016), the underlying reasons for delays could be determined
efficiently. Through the calculation of excavated volume and the detection of machine
states, periods of low performance could be attributed to the involved resources, and their
performance factors could be estimated accordingly.
While the presented concept is applicable in the discussed scenarios, it is subject
to several limitations. For scenarios of modern large-scale construction projects involving
thousands of activities, the presented scheduling algorithms still require large amounts of
computational power and long run times. Finding the right level
of abstraction is vital. However, combining small activities to reduce the complexity of
the model may create inaccurate resource allocations. On the other hand, the presented
methods will, even in complex cases, be able to return good solutions in acceptable time.
Construction sites that are very constricted and offer little possibility to be photographed
from outside the pit are hard to monitor using the photogrammetry approach. Fur-
thermore, the video analysis concept requires cameras on site. These cameras need
power and a way to transfer the recorded image data to a server for processing. If no wire-
less network or fast mobile network is available, storage cards would need to be retrieved
from the cameras, making the approach rather impractical. The use of unmanned aerial
vehicles (UAVs) may be a solution to this limitation.
In practice, many construction projects involve different contractors performing dif-
ferent activities. Once the initial plans are in motion, it may not be possible to shift an
activity performed by another contractor in time, as that contractor works on its own sched-
ule across different projects. Furthermore, components such as precast concrete parts might
be produced just in time and delivered directly to the site. If there is no space to store the
delivered parts intermediately, shifting that activity may not be possible. In the case of
excavation and shoring operations, however, the projects are usually performed by a single
contractor and hardly involve any large just-in-time deliveries.
The presented work could be extended by a more detailed representation of the costs of
construction projects. The transportation of additional resources to and from the construc-
tion sites creates costs that are currently not reflected in the described representation. The
presented scheduling algorithms could be extended towards the resource leveling problem
(RLP) in order to compare the results with respect to robustness. Additionally, the per-
formance factors for excavation processes could be modeled in more detail. Špačková & Straub (2013)
presented a dynamic Bayesian network approach to model the uncertainty in tunnel exca-
vation based on geological data, taking extraordinary events into account. Such a model
could also be implemented for regular excavation processes. In the process model of ex-
cavation processes, the excavation pit could be split into multiple excavation areas, as is
done in the example in Section 3.1. The capability of calculating the volumes of each
of the excavation areas could be added to the point cloud-based excavation monitoring
approach.
10 References
Agarwal, A.; Colak, S. & Erenguc, S. (2011), A Neurogenetic approach for the resource-
constrained project scheduling problem, Computers & Operations Research, 38(1), 44–
50, ISSN 03050548, doi:10.1016/j.cor.2010.01.007.
Ahmadabadian, A. H.; Robson, S.; Boehm, J.; Shortis, M.; Wenzel, K. & Fritsch, D.
(2013), A comparison of dense matching algorithms for scaled surface reconstruction
using stereo camera rigs, ISPRS Journal of Photogrammetry and Remote Sensing, 78,
157–167, ISSN 09242716, doi:10.1016/j.isprsjprs.2013.01.015.
Alcaraz, J. & Maroto, C. (2001), A Robust Genetic Algorithm for Resource Allocation
in Project Scheduling, Annals of Operations Research, 102, 83–109, doi:10.1023/A:
1010949931021.
Alcaraz, J. & Maroto, C. (2009), Genetic Algorithms for the Resource-Constrained
Project Scheduling Problem, BEIO, Boletín de Estadística e Investigación Operativa,
25(1), 22–31.
Alvarez-Valdes, R.; Crespo, E.; Tamarit, J. & Villa, F. (2008), GRASP and path relinking
for project scheduling under partially renewable resources, European Journal of Opera-
tional Research, 189(3), 1153–1170, ISSN 03772217, doi:10.1016/j.ejor.2006.06.073.
Baloi, D. & Price, A. D. (2003), Modelling global risk factors affecting construction
cost performance, International Journal of Project Management, 21(4), 261–269, ISSN
02637863, doi:10.1016/S0263-7863(02)00017-0.
Banks, J.; Carson, J.; Nelson, B. & Nicol, D. (2001), Discrete-Event System Simulation,
Prentice Hall, ISBN 0-13-088702-1.
Barber, C. B.; Dobkin, D. P. & Huhdanpaa, H. (1996), The Quickhull algorithm for con-
vex hulls, ACM Transactions On Mathematical Software, 22(4), 469–483.
Bettemir, Ö. H. & Sonmez, R. (2012), Hybrid Genetic Algorithm with Simulated An-
nealing for Resource-Constrained Project Scheduling, Journal of Management in En-
gineering, 31(5), 1–8, doi:10.1061/(ASCE)ME.1943-5479.0000323.
Björk, B.-C. (1989), Basic structure of a proposed building product model, Computer-
Aided Design, 21(2), 71–78, doi:10.1016/0010-4485(89)90141-3.
Björk, B.-C. (1992), A Unified Approach for Modeling Construction Information, Build-
ing and Environment, 27(2), 173–194.
Blazewicz, J.; Lenstra, J. & Kan, A. (1983), Scheduling subject to resource constraints:
classification and complexity, Discrete Applied Mathematics, 5(1), 11–24.
Blum, C. & Roli, A. (2003), Metaheuristics in combinatorial optimization: overview and
conceptual comparison, ACM Computing Surveys, 35(3), 189–213, ISSN 02545330,
doi:10.1007/s10479-005-3971-7.
Boctor, F. F. (1990), Some efficient multi-heuristic procedures for resource-constrained
project scheduling, European Journal of Operational Research, 49(1), 3–13, ISSN
03772217, doi:10.1016/0377-2217(90)90116-S.
Boctor, F. F. (1993), Heuristics for scheduling projects with resource restrictions and sev-
eral resource-duration modes, International Journal of Production Research, 31(11),
2547–2558, ISSN 0020-7543, doi:10.1080/00207549308956882.
Boctor, F. F. (1996), Resource-constrained project scheduling by simulated annealing,
International Journal of Production Research, 34(8), 2335–2351, ISSN 0020-7543,
doi:10.1080/00207549608905028.
Bogenstätter, U. (2000), Prediction and optimization of life-cycle costs in early design,
Building Research & Information, 28(5-6), 376–386, ISSN 0961-3218, doi:10.1080/
096132100418528.
Bornaz, L. & Rinaudo, F. (2004), Terrestrial laser scanner data processing, Proceedings
of XX ISPRS Commission V Congress, 45(B5), 514–519.
Bosché, F. (2010), Automated recognition of 3D CAD model objects in laser scans and
calculation of as-built dimensions for dimensional compliance control in construction,
Advanced Engineering Informatics, 24(1), 107–118, ISSN 14740346, doi:10.1016/j.
aei.2009.08.006.
Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C. T. & Haas, R. (2015), The value of integrat-
ing Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser
scanning and BIM: The case of cylindrical MEP components, Automation in Construc-
tion, 49, 201–213, ISSN 09265805, doi:10.1016/j.autcon.2014.05.014.
Bosche, F. & Haas, C. (2008), Automated retrieval of 3D CAD model objects in con-
struction range images, Automation in Construction, 17(4), 499–512, ISSN 09265805,
doi:10.1016/j.autcon.2007.09.001.
Böttcher, J. & Drexl, A. (1999), Project scheduling under partially renewable resource
constraints, Management Science, 45(4), 543–559.
Bouleimen, K. & Lecocq, H. (2003), A new efficient simulated annealing algorithm for
the resource-constrained project scheduling problem and its multiple mode version,
European Journal of Operational Research, 149(2), 268–281, ISSN 03772217, doi:
10.1016/S0377-2217(02)00761-0.
Brandt, S. & Bode, M. (2014), Ablaufplanung unter Berücksichtigung von Unschärfe, in
Forum Bauinformatik 2014.
Braun, A.; Tuttas, S.; Borrmann, A. & Stilla, U. (2015a), A concept for automated con-
struction progress monitoring using BIM-based geometric constraints and photogram-
metric point clouds, Journal of Information Technology in Construction, 20(8), 68–79.
Braun, A.; Tuttas, S.; Borrmann, A. & Stilla, U. (2015b), Automated progress moni-
toring based on photogrammetric point clouds and precedence relationship graphs, in
The 32nd International Symposium on Automation and Robotics in Construction and
Mining (ISARC).
Brooks, G. H. & White, C. R. (1965), An algorithm for finding optimal or near opti-
mal solutions to the production scheduling problem, Journal of Industrial Engineering,
16(1), 34.
Brooks, R. J. & Tobias, A. M. (1996), Choosing the best model: Level of detail, com-
plexity, and model performance, Mathematical and Computer Modelling, 24(4), 1–14,
ISSN 08957177, doi:10.1016/0895-7177(96)00103-3.
Brown, D. C. (1976), The bundle adjustment—progress and prospects, Int. Archives Pho-
togrammetry, 21(3), 1.
Browning, T. R. & Yassine, A. A. (2010), Resource-constrained multi-project scheduling:
Priority rule performance revisited, International Journal of Production Economics,
126(2), 212–228, ISSN 09255273, doi:10.1016/j.ijpe.2010.03.009.
Bügler, M. & Borrmann, A. (2014), Using Swap-Based Search Trees to obtain Solu-
tions for Resource Constrained Project Scheduling Problems, in Proceedings in Applied
Mathematics and Mechanics, doi:10.1002/pamm.201410385.
Bügler, M.; Dori, G. & Borrmann, A. (2013), Swap Based Process Schedule Optimization
using Discrete-Event Simulation, in Proceedings of the International Conference on
Construction Applications of Virtual Reality, doi:10.13140/RG.2.1.2008.8162.
Bügler, M.; Ogunmakin, G.; Vela, P. A.; Borrmann, A. & Teizer, J. (2016), Fusion of Pho-
togrammetry and Video Analysis for Productivity Assessment of Earthwork Processes
(Accepted), Computer-Aided Civil and Infrastructure Engineering.
Błażewicz, J.; Domschke, W. & Pesch, E. (1996), The job shop scheduling problem:
Conventional and new solution techniques, European Journal of Operational Research,
93(1), 1–33, ISSN 03772217, doi:10.1016/0377-2217(95)00362-2.
Calhoun, K. M.; Deckro, R. F.; Moore, J. T.; Chrissis, J. W. & Van Hove, J. C. (2002),
Planning and re-planning in project and production scheduling, Omega, 30(3), 155–
170, ISSN 03050483, doi:10.1016/S0305-0483(02)00024-5.
Cameron, W. B. (1963), Informal sociology: A casual introduction to sociological think-
ing, Random House New York.
Chahrour, R. (2006), Integration von CAD und Simulation auf Basis von Produktmod-
ellen im Erdbau.
Chahrour, R. & Franz, V. (2006), Seamless data model for a CAD-based simulation sys-
tem, in Proceedings of the Joint International Conference on Computing and Decision
Making in Civil and Building Engineering, pp. 3958–3967.
Chan, W.; Chua, D. & Kannan, G. (1996), Construction resource scheduling with genetic
algorithms, Journal of Construction Engineering and Management, 122(2), 125–132.
Chassein, A. & Goerigk, M. (2016), A bicriteria approach to robust optimization, Com-
puters & Operations Research, 66, 181–189, ISSN 03050548, doi:10.1016/j.cor.2015.
08.007.
Chen, R. M.; Wu, C. L.; Wang, C. M. & Lo, S. T. (2010), Using novel parti-
cle swarm optimization scheme to solve resource-constrained scheduling problem in
PSPLIB, Expert Systems with Applications, 37(3), 1899–1910, ISSN 09574174, doi:
10.1016/j.eswa.2009.07.024.
Cheng, M.-Y. & Chen, J.-C. (2002), Integrating barcode and GIS for monitoring con-
struction progress, Automation in Construction, 11(1), 23–33, ISSN 09265805, doi:
10.1016/S0926-5805(01)00043-7.
Cheng, T.; Venugopal, M.; Teizer, J. & Vela, P. (2011), Performance evaluation of
ultra wideband technology for construction resource location tracking in harsh en-
vironments, Automation in Construction, 20(8), 1173–1184, ISSN 09265805, doi:
10.1016/j.autcon.2011.05.001.
Cheok, G. S. & Stone, W. C. (1999), Non-intrusive scanning technology for construction
assessment, in Proceedings of the 16th International Symposium on Automation and
Robotics in Construction, pp. 645–650.
Cheok, G. S.; Stone, W. C.; Lipman, R. R. & Witzgall, C. (2000), Ladars for construction
assessment and update, Automation in Construction, 9(5-6), 463–477, ISSN 09265805,
doi:10.1016/S0926-5805(00)00058-3.
Chtourou, H. & Haouari, M. (2008), A two-stage-priority-rule-based algorithm for robust
resource-constrained project scheduling, Computers & Industrial Engineering, 55(1),
183–194, ISSN 03608352, doi:10.1016/j.cie.2007.11.017.
Clark, W.; Polakov, W. N. & Trabold, F. W. (1922), The Gantt chart: A working tool of
management, The Ronald Press Company.
Dorigo, M.; Maniezzo, V. & Colorni, A. (1996), Ant system: Optimization by a colony
of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics, Part B:
Cybernetics, 26(1), 29–41, ISSN 10834419, doi:10.1109/3477.484436.
Dossick, C. S. & Neff, G. (2010), Organizational Divisions in BIM-Enabled Commercial
Construction, Journal of Construction Engineering and Management, 136, 459–467,
ISSN 0733-9364, doi:10.1061/(ASCE)CO.1943-7862.0000109.
Drexl, A. (1991), Scheduling of Project Networks by Job Assignment, Management Sci-
ence, 37(12), 1590–1602, ISSN 0025-1909, doi:10.1287/mnsc.37.12.1590.
Dzeng, R.-J. & Tommelein, I. D. (2004), Product modeling to support case-based con-
struction planning and scheduling, Automation in Construction, 13(3), 341–360, ISSN
09265805, doi:10.1016/j.autcon.2003.10.002.
Echeverry, D. (1996), Adaptation of barcode technology for construction project control,
in Proceedings of the ASCE Conference on Computing in Civil Engineering, pp. 1034–
1040.
El-Omari, S. & Moselhi, O. (2011), Integrating automated data acquisition technolo-
gies for progress reporting of construction projects, Automation in Construction, 20(6),
699–705, ISSN 09265805, doi:10.1016/j.autcon.2010.12.001.
Elsayed, E. A. (1982), Algorithms for project scheduling with resource constraints, In-
ternational Journal of Production Research, 20(1), 95–103, ISSN 0020-7543, doi:
10.1080/00207548208947751.
Feng, C.-W.; Liu, L. & Burns, S. A. (1997), Using Genetic Algorithms to Solve Construc-
tion Time-Cost Trade-Off Problems, Journal of Computing in Civil Engineering, 11(3),
184–189, ISSN 0887-3801, doi:10.1061/(ASCE)0887-3801(1997)11:3(184).
Fente, J.; Knutson, K. & Schexnayder, C. (1999), Defining a Beta distribution function
for construction simulation, in Proc. of the 1999 Winter Simulation Conference, ISBN
0-7803-5780-9, ISSN 02750708, pp. 1010–1015, doi:10.1109/WSC.1999.816813.
Fischer, M. & Aalami, F. (1996), Scheduling with Computer-Interpretable Construction
Method Models, Construction Engineering and Management, 122(4), 337–347, ISSN
0733-9364, doi:10.1061/(ASCE)0733-9364(1996)122:4(337).
Fishman, G. S. & Kiviat, P. J. (1967), The statistics of discrete-event simulation, Simula-
tion, 10(4), 185–195.
Flachs, G. M.; Thompson, W. E.; Black, R. J.; Taylor, J. M.; Cannon, W.; Rogers, R.
& Others (1977), An automatic video tracking system, in Proceedings of the National
Aerospace and Electronics Conference, pp. 361–368.
Flachs, G. M.; Thompson, W. E.; Gilbert, A. L. & Others (1976), A real-time Struc-
tural Tracking Algorithm, in Proceedings of the National Aerospace and Electronics
Conference, pp. 161–168.
Herroelen, W. & Leus, R. (2004), Robust and reactive project scheduling: a review
and classification of procedures, International Journal of Production Research, 42(8),
1599–1620.
Holland, J. H. (1975), Adaptation in Natural and Artificial Systems, The University of
Michigan Press.
Hoos, H. H. & Stützle, T. (2004), Stochastic local search: Foundations & applications,
Elsevier.
Hoppe, H. (1999), New quadric metric for simplifying meshes with appearance attributes,
Proceedings Visualization '99 (Cat. No.99CB37067), doi:10.1109/VISUAL.1999.809869.
Horenburg, T.; Wimmer, J. & Günthner, W. A. (2012), Resource Allocation in Construc-
tion Scheduling based on Multi-Agent Negotiation, in Proceedings of the 14th Int.
Conference on Computing in Civil & Building Engineering.
Huang, R.-y. & Hsieh, B.-c. (2012), System Modeling of Object-Oriented Construction
Process Simulations, in Proceedings of the 29th ISARC, pp. 1–5.
Huhnt, W. & Enge, F. (2006), Can algorithms support the specification of construction
schedules, Journal of Information Technology in Construction, 11, 547–564, ISSN
14006529.
Jackson, J. (1957), Simulation research on job shop production, Naval Research Logistics
Quarterly, 4(4), 287–295.
Jaselskis, E.; Anderson, M. R.; Jahren, C. T.; Rodriguez, Y. & Njos, S. (1995), Radio-
frequency identification applications in construction industry, Journal of Construction
Engineering and Management, 121(2), 189–196.
Jaśkowski, P. & Sobotka, A. (2006), Scheduling construction projects using evolutionary
algorithm, Journal of Construction Engineering and Management, 132(August), 861–
870.
Jedrzejowicz, P. & Ratajczak-Ropel, E. (2014), Reinforcement Learning strategies for A-
Team solving the Resource-Constrained Project Scheduling Problem, Neurocomputing,
146, 301–307, ISSN 09252312, doi:10.1016/j.neucom.2014.05.070.
Józefowska, J.; Mika, M. & Różycki, R. (2001), Simulated annealing for multi-mode
resource-constrained project scheduling, Annals of Operations Research, 102(1-4),
137–155.
Kagioglou, M.; Cooper, R.; Aouad, G. & Sexton, M. (2000), Rethinking Construction:
The Generic Design and Construction Process Protocol, Engineering Construction and
Architectural Management, 7(2), 141–153, ISSN 0969-9988, doi:10.1108/eb021139.
Kahlmann, T.; Remondino, F. & Guillaume, S. (2007), Range Imaging Technology: New
Developments and applications for people identification and tracking, in Proceedings of
the SPIE - The International Society for optical Engineering, doi:10.1117/12.702512.
Kargul, A.; Bügler, M.; Borrmann, A. & Günthner, W. A. (2015), Web based field data
analysis and data-driven simulation application for construction performance predic-
tion, Journal of Information Technology in Construction, 20, 479–494.
Kazhdan, M.; Bolitho, M. & Hoppe, H. (2006), Poisson surface reconstruction, in Pro-
ceedings of the fourth Eurographics symposium on Geometry processing, Eurographics
Association, Aire-la-Ville, Switzerland, Switzerland, SGP ’06, ISBN 3-905673-36-3,
pp. 61–70.
Kelley, J. & Walker, M. (1961), Critical-Path Planning and Scheduling: Mathematical
Basis, Operations Research, 9(3), 296–320.
Kern, F. (2002), Precise determination of volume with terrestrial 3D-laserscanner, in
Geodesy for Geotechnical and Structural Engineering II, pp. 531–534.
Kim, K.; Yun, Y.; Yoon, J.; Gen, M. & Yamazaki, G. (2005), Hybrid genetic algorithm
with adaptive abilities for resource-constrained multiple project scheduling, Computers
in Industry, 56(2), 143–160, ISSN 01663615, doi:10.1016/j.compind.2004.06.006.
Kiran, A. S. & Smith, M. L. (1984), Simulation studies in job shop scheduling - A
Survey, Computers & Industrial Engineering, 8(2), 87–93, ISSN 03608352, doi:
10.1016/0360-8352(84)90002-0.
Kirkpatrick, S.; Gelatt, C. D. & Vecchi, M. P. (1983), Optimization by Simulated An-
nealing, Science, 220(4598), 671–680, ISSN 00368075, doi:10.1126/science.220.4598.
671.
Kiziltas, S.; Burcu, A.; Ergen, E. & Pingbo, T. (2008), Technological assessment and
process implications of field data capture technologies for construction and facil-
ity/infrastructure management, ITcon, 13, 134–154.
Klaubert, C. & Günthner, W. A. (2011), Entwicklung eines RFID-basierten Informations-
und Kommunikationssystems für die Baulogistik, Ph.D. thesis.
Klaubert, C.; Schorr, M. & Günthner, W. A. (2010), Real time construction progress
control using NFC, ITG-Fachbericht - RFID Systech.
Klein, R. (2000), Bidirectional planning: Improving priority rule-based heuristics for
scheduling resource-constrained projects, European Journal of Operational Research,
127(3), 619–638, ISSN 03772217, doi:10.1016/S0377-2217(99)00347-1.
Klein, R. W. & Baris, P. M. (1991), Selecting and generating variates for modeling service
times, Computers & Industrial Engineering, 20(1), 27–33.
Kochetov, Y. & Stolyar, A. (2003), Evolutionary local search with variable neighborhood
for the resource constrained project scheduling problem, in Proceedings of the Work-
shop on Computer Science and Information Technologies CSIT’2003.
Koehn, E.; Young, R.; Kuchar, J. & Seling, F. (1978), Cost of delays in construction,
Journal of the Construction Division, 104(3), 323–331.
Kolisch, R. (1996a), Efficient priority rules for the resource-constrained project schedul-
ing problem, Journal of Operations Management, 14(3), 179–192, ISSN 02726963,
doi:10.1016/0272-6963(95)00032-1.
Kolisch, R. (1996b), Serial and parallel resource-constrained project scheduling methods
revisited: Theory and computation, European Journal of Operational Research, 90(2),
320–333, doi:10.1016/0377-2217(95)00357-6.
Kolisch, R. & Drexl, A. (1996), Adaptive search for solving hard project scheduling prob-
lems, Naval Research Logistics, 43(1), 23–40, ISSN 0894069X, doi:10.1002/(SICI)
1520-6750(199602)43:1<23::AID-NAV2>3.3.CO;2-4.
Kolisch, R. & Hartmann, S. (1999), Heuristic algorithms for the resource-constrained
project scheduling problem: Classification and computational analysis, in Weglarz, J.
(ed.), Handbook on Recent Advances in Project Scheduling, Springer, ISBN 978-1-
4613-7529-6, pp. 147–178, doi:10.1007/978-1-4615-5533-9{\_}7.
Kolisch, R. & Hartmann, S. (2006), Experimental investigation of heuristics for resource-
constrained project scheduling: An update, European Journal of Operational Research,
174(1), 23–37, ISSN 03772217, doi:10.1016/j.ejor.2005.01.065.
Kolisch, R. & Sprecher, A. (1996), Project Scheduling Problem Library (PSPLIB).
Kolisch, R. & Sprecher, A. (1997), PSPLIB - A project scheduling problem library,
European Journal of Operational Research, 96(1), 205–216, ISSN 03772217, doi:
10.1016/S0377-2217(96)00170-1.
König, M. (2011a), Generation of robust construction schedules using evolution strate-
gies, in Proceedings of the 2011 ASCE International Conference, 2005, pp. 1–8.
König, M. (2011b), Robust construction scheduling using discrete-event simulation, in
Proceedings of the 2011 ASCE International Workshop on Computing in Civil En-
gineering, ASCE, American Society of Civil Engineers, pp. 446–453, doi:10.1061/
41182(416)55.
König, M. & Beißert, U. (2009), Construction scheduling optimization by simulated an-
nealing, in Proceedings of ASCE Workshop on Computing in Civil Engineering, Austin,
TX, pp. 183–190.
König, M.; Beißert, U.; Steinhauer, D. & Bargstädt, H.-J. (2007), Constraint-based Simu-
lation of Outfitting Processes in Shipbuilding and Civil Engineering, in Proceedings of
the 6th EUROSIM Congress on Modeling and Simulation.
König, M.; Koch, C.; Habenicht, I. & Spieckermann, S. (2012), Intelligent BIM-based
construction scheduling using discrete event simulation, in Proceedings of the 2012
Winter Simulation Conference, ISBN 9781467347808, pp. 1219–1229.
Likins, G. E.; Rausche, F.; Goble, G. & Shafer, N. (1999), Pile Installation Recording
System.
Lin, Y.-C.; Cheung, W.-F. & Siao, F.-C. (2014), Developing mobile 2D barcode/RFID-
based maintenance management system, Automation in Construction, 37, 110–121,
ISSN 09265805, doi:10.1016/j.autcon.2013.10.004.
Long, L. D. & Ohsato, A. (2009), A genetic algorithm-based method for scheduling
repetitive construction projects, Automation in Construction, 18(4), 499–511, ISSN
09265805, doi:10.1016/j.autcon.2008.11.005.
Lowe, D. (1999), Object recognition from local scale-invariant features, in Proceedings of
the Seventh IEEE International Conference on Computer Vision, IEEE, ISBN 0-7695-
0164-8, pp. 1150–1157, doi:10.1109/ICCV.1999.790410.
Lowe, D. (2004), Distinctive Image Features from Scale-Invariant Keypoints, Interna-
tional Journal of Computer Vision, 60(2), 91–110, doi:10.1023/B:VISI.0000029664.99615.94.
Lu, M.; Chen, W.; Shen, X.; Lam, H. C. & Liu, J. (2007), Positioning and tracking con-
struction vehicles in highly dense urban areas and building construction sites, Automa-
tion in Construction, 16(5), 647–656, ISSN 09265805, doi:10.1016/j.autcon.2006.11.
001.
Lu, M.; Lam, H.-C. & Dai, F. (2008), Resource-constrained critical path analysis based on
discrete event simulation and particle swarm optimization, Automation in Construction,
17(6), 670–681, doi:10.1016/j.autcon.2007.11.004.
MacCrimmon, K. R. & Ryavec, C. A. (1964), An Analytical Study of the PERT Assump-
tions, Operations Research, 12(1), 16–37, doi:10.1287/opre.12.1.16.
Makadia, A.; Patterson, A. & Daniilidis, K. (2006), Fully Automatic Registration of 3D
Point Clouds, in Proceedings of the 2006 IEEE Computer Society Conference on Com-
puter Vision and Pattern Recognition, volume 1, pp. 1297–1304, ISSN 1063-6919,
doi:10.1109/CVPR.2006.122.
Malcolm, D. G.; Roseboom, J. H.; Clark, C. E. & Fazar, W. (1959), Application of a
Technique for Research and Development Program Evaluation, Operations Research,
7(5), 646–669, ISSN 0030-364X, doi:10.1287/opre.7.5.646.
Marler, R. T. & Arora, J. S. (2004), Survey of multi-objective optimization methods
for engineering, Structural and Multidisciplinary Optimization, 26(6), 369–395, ISSN
1615147X, doi:10.1007/s00158-003-0368-6.
Martinez, J. & Ioannou, P. (1994), General purpose simulation with Stroboscope, in
Proc. of the Winter Simulation Conference, ISBN 0-7803-2109-X, ISSN 02750708,
pp. 1159–1166, doi:10.1109/WSC.1994.717503.
Marx, A. & König, M. (2011), Preparation of Constraints for Construction Simulation,
in Proceedings of the 2011 ASCE International Workshop on Computing in Civil Engi-
neering, pp. 1–8.
Rabbani, T.; Dijkman, S.; van den Heuvel, F. & Vosselman, G. (2007), An inte-
grated approach for modelling and global registration of point clouds, ISPRS Jour-
nal of Photogrammetry and Remote Sensing, 61(6), 355–370, ISSN 09242716, doi:
10.1016/j.isprsjprs.2006.09.006.
Ranganathananda, S. (1990), Swami Vivekananda and Human Excellence, Advaita
Ashrama.
Ray, S. J. & Teizer, J. (2012), Real-time construction worker posture analysis for
ergonomics training, Advanced Engineering Informatics, 26(2), 439–455, ISSN
14740346, doi:10.1016/j.aei.2012.02.011.
Riley, L. (2012), Managing and Controlling Risk in Complex Infrastructure Projects: Us-
ing Discrete Event Simulation for Stochastic Scheduling in Construction Engineering,
in Proceedings of the Autumn Simulation Multi-Conference.
Rivera, J. (2013), A Hybrid Heuristic Algorithm for Solving the Resource Constrained
Project Scheduling Problem (RCPSP), Revista EIA, 87–100.
Rommelfanger, H. (1990), Fulpal - An Interactive Method for Solving (Multiobjective)
Fuzzy Linear Programming Problems, in Slowinski, R. & Teghem, J. (eds.), Stochastic
Versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncer-
tainty, Springer Netherlands, volume 6 of Theory and Decision Library, ISBN 978-94-
010-7449-0, pp. 279–299, doi:10.1007/978-94-009-2111-5_14.
Sabuncuoglu, I. & Goren, S. (2009), Hedging production schedules against uncertainty in
manufacturing environment with a review of robustness and stability research, Interna-
tional Journal of Computer Integrated Manufacturing, 22(2), 138–157.
Sahinoglu, Z. (2008), Ultra-Wideband Positioning Systems, Technical report, Mitsubishi
Electric Research Laboratories, Massachusetts.
Saidi, K. S.; Haas, C. T. & Balli, N. A. (2002), The Value of Handheld Computers in
Construction, in Proceedings of the 19th International Symposium on Automation and
Robotics in Construction, IAARC, Washington, DC, USA.
Saidi, K. S.; Teizer, J.; Franaszek, M. & Lytle, A. M. (2011), Static and dynamic per-
formance evaluation of a commercially-available ultra wideband tracking system, Au-
tomation in Construction, 20(5), 519–530, ISSN 09265805, doi:10.1016/j.autcon.2010.
11.018.
Salem, O. & Zimmer, E. (2005), Application of lean manufacturing principles to con-
struction, Lean Construction Journal, 2(2), 51–54.
Salewski, F.; Schirmer, A. & Drexl, A. (1997), Project scheduling under resource
and mode identity constraints: Model, complexity, methods, and application, Eu-
ropean Journal of Operational Research, 102(1), 88–110, ISSN 03772217, doi:
10.1016/S0377-2217(96)00219-6.
Silpa-Anan, C. & Hartley, R. (2008), Optimised KD-trees for fast image descriptor match-
ing, in Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern
Recognition, pp. 1–8, ISSN 1063-6919, doi:10.1109/CVPR.2008.4587638.
Špačková, O. & Straub, D. (2013), Dynamic Bayesian Network for Probabilistic Mod-
eling of Tunnel Excavation Processes, Computer-Aided Civil and Infrastructure Engi-
neering, 28(1), 1–21, ISSN 10939687, doi:10.1111/j.1467-8667.2012.00759.x.
Sprecher, A.; Kolisch, R. & Drexl, A. (1995), Semi-active, active, and non-delay sched-
ules for the resource-constrained project scheduling problem, European Journal of Op-
erational Research, 80(5), 94–102, doi:10.1016/0377-2217(93)E0294-8.
Stilla, U.; Tuttas, S.; Braun, A. & Borrmann, A. (2015), Baufortschrittskon-
trolle basierend auf photogrammetrischen Punktwolken und 4D-
Gebäudeinformationsmodellen (BIM), in Hanke, K. & Weinold, T. (eds.), Proceedings
der 18. Internationaler Geodätischen Woche, ISBN 9783879075546, pp. 157–163.
Su, Y. Y.; Hashash, Y. M. a. & Liu, L. Y. (2006), Integration of Construction As-Built
Data Via Laser Scanning with Geotechnical Monitoring of Urban Excavation, Journal
of Construction Engineering and Management, 132(12), 1234–1241, ISSN 0733-9364,
doi:10.1061/(ASCE)0733-9364(2006)132:12(1234).
Szczesny, K. & König, M. (2015), Reactive scheduling based on actual logistics data
by applying simulation-based optimization, Visualization in Engineering, 3(10), ISSN
2213-7459, doi:10.1186/s40327-015-0020-8.
Szczesny, K.; König, M.; Laußat, L. & Helmus, M. (2013), Integration of Uncertain Real-
Time Logistics Data for Reactive Scheduling using Fuzzy Set Theory, in Proceedings
of the 30th International Symposium of Automation and Robotics in Construction and
Mining (ISARC), pp. 691–698.
Tormos, P. & Lova, A. (2001), A Competitive Heuristic Solution Technique for Resource-
Constrained Project Scheduling, Annals of Operations Research, 102(1-4), 65–81,
ISSN 02545330, doi:10.1023/A:1010997814183.
Trevor, A. J. B.; Gedikli, S.; Rusu, R. B. & Christensen, H. I. (2013), Efficient organized
point cloud segmentation with connected components, in Proceedings of Semantic Per-
ception Mapping and Exploration.
Triggs, B.; McLauchlan, P. F.; Hartley, R. I. & Fitzgibbon, A. W. (2000), Bundle
Adjustment - A Modern Synthesis, in Triggs, B.; Zisserman, A. & Szeliski, R.
(eds.), Vision Algorithms: Theory and Practice, Springer, pp. 298–372, doi:10.1007/
3-540-44480-7_21.
Tuttas, S.; Braun, A.; Borrmann, A. & Stilla, U. (2014), Comparison of Photogram-
metric Point Clouds with BIM Building Elements for Construction Progress Moni-
toring, in The International Archives of Photogrammetry, Remote Sensing and Spa-
tial Information Sciences, volume XL-3, ISSN 2194-9034, pp. 341–345, doi:10.5194/
isprsarchives-XL-3-341-2014.
Tuttas, S.; Braun, A.; Borrmann, A. & Stilla, U. (2015), Validation of BIM Components
by Photogrammetric Point Clouds for Construction Site Monitoring, in ISPRS
Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, volume
II-3/W4, ISSN 2194-9050, pp. 231–237, doi:10.5194/isprsannals-II-3-W4-231-2015.
Ullman, S. (1979), The Interpretation of Structure from Motion, Proceedings of the Royal
Society B: Biological Sciences, 203, 405–426, ISSN 0962-8452, doi:10.1098/rspb.
1979.0006.
Valls, V. & Ballestín, F. (2002), A Hybrid Genetic Algorithm for the RCPSP with the
Peak Crossover Operator, in Proceedings of 8th International Workshop on Project
Management and Scheduling.
Valls, V.; Ballestín, F. & Quintanilla, S. (2004), A population-based approach to the
resource-constrained project scheduling problem, Annals of Operations Research,
131(1-4), 305–324, ISSN 02545330, doi:10.1023/B:ANOR.0000039524.09792.c9.
Valls, V.; Ballestín, F. & Quintanilla, S. (2005), Justification and RCPSP: A technique
that pays, European Journal of Operational Research, 165(2), 375–386, doi:10.1016/j.
ejor.2004.04.008.
Valls, V.; Ballestín, F. & Quintanilla, S. (2008), A hybrid genetic algorithm for the
resource-constrained project scheduling problem, European Journal of Operational
Research, 185(2), 495–508, ISSN 03772217, doi:10.1016/j.ejor.2006.12.033.
Valls, V.; Quintanilla, S. & Ballestín, F. (2001), An evolutionary approach to the resource-
constrained project scheduling problem, in Proceedings of the 4th Metaheuristics In-
ternational Conference (MIC), pp. 217–220.
Wu, I.-C. C.; Borrmann, A.; Beißert, U.; König, M. & Rank, E. (2010), Bridge construc-
tion schedule generation with pattern-based construction methods and constraint-based
simulation, Advanced Engineering Informatics, 24(4), 379–388, ISSN 14740346, doi:
10.1016/j.aei.2010.07.002.
Wübbeler, G.; Krystek, M. & Elster, C. (2008), Evaluation of measurement uncertainty
and its numerical calculation by a Monte Carlo method, Measurement Science and
Technology, 19(8), 084009, ISSN 0957-0233, doi:10.1088/0957-0233/19/8/084009.
Yagiz, S.; Gokceoglu, C.; Sezer, E. & Iplikci, S. (2009), Application of two non-linear
prediction tools to the estimation of tunnel boring machine performance, Engineering
Applications of Artificial Intelligence, 22(4-5), 808–814, ISSN 09521976, doi:10.1016/
j.engappai.2009.03.007.
Yang, J.; Arif, O.; Vela, P. A.; Teizer, J. & Shi, Z. (2010), Tracking multiple workers
on construction sites using video cameras, Advanced Engineering Informatics, 24(4),
428–434, ISSN 14740346, doi:10.1016/j.aei.2010.06.008.
Yang, J.; Vela, P. A.; Shi, Z. & Teizer, J. (2009), Probabilistic Multiple People Tracking
through Complex Situations, in Proceedings of the 11th IEEE International Workshop
on Performance Evaluation of Tracking and Surveillance, pp. 79–86.
Yang, J.; Vela, P. A.; Teizer, J. & Shi, Z. K. (2011), Vision-based crane tracking for un-
derstanding construction activity, in 2011 ASCE International Workshop on Computing in
Civil Engineering, 28(February), 258–265, ISSN 0887-3801, doi:10.1061/41182(416)
32.
Yang, Z. Y. & Wang, Z. F. (2010), Comparison between AON and AOA network dia-
grams, in Proceedings of the IEEE 17th International Conference on Industrial En-
gineering and Engineering Management, ISBN 9781424464814, pp. 1507–1509, doi:
10.1109/ICIEEM.2010.5646036.
Yates, F. (1934), Tables Involving Small Numbers and the Chi-Squared Test, Supplement
to the Journal of the Royal Statistical Society, 1(2), 217–235.
Yu, P. L. & Leitmann, G. (1974), Compromise solutions, domination structures, and
Salukvadze’s solution, Journal of Optimization Theory and Applications, 13(3), 362–
378, ISSN 00223239, doi:10.1007/BF00934871.
Zadeh, L. A. (1978), Fuzzy sets as a basis for a theory of possibility, Fuzzy sets and
systems, 1, 3–28.
Zhang, H.; Li, H. & Tam, C. (2006), Permutation-based particle swarm optimization for
resource-constrained project scheduling, Journal of computing in civil engineering, 20,
141–149.
Zhang, H.; Li, X.; Li, H. & Huang, F. (2005a), Particle swarm optimization-based
schemes for resource-constrained project scheduling, Automation in Construction,
14(3), 393–404, ISSN 09265805, doi:10.1016/j.autcon.2004.08.006.
Zhang, H.; Tam, C. & Li, H. (2005b), Activity object-oriented simulation strategy for
modeling construction operations, Journal of Computing in Civil engineering, 19(3),
313–322.
Zheng, Y.-J. (2015), Water wave optimization: A new nature-inspired metaheuristic,
Computers & Operations Research, 55, 1–11, ISSN 03050548, doi:10.1016/j.cor.2014.
10.008.
List of Figures
1.1 Flow chart of the optimization-simulation cycle. The process starts with
an initial plan that undergoes a simulation and optimization process. Dur-
ing the execution of the plan, data is acquired about the progress. Until
the plan execution is complete, the performance factors may require updating,
which triggers another simulation and optimization run and updates the plans
accordingly. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Scheme of a Resource Constrained Project Scheduling Problem (RCPSP).
Rectangles represent activities, while their fill color and hatching indi-
cates the resource that is required. The hexagons represent required ma-
terials while the rounded rectangles represent resources. The parallelo-
grams represent geometric spaces. Dotted arrows indicate usages, while
solid line arrows represent precedence constraints. . . . . . . . . . . . . . 13
2.1 Flowchart of the reactive modeling and simulation cycle. The real system
is modeled as a collection of process models, which take performance
factors of involved resources into account. The performance factors are
estimated and updated by monitoring the real system. The real system,
however, is subject to uncertainty, which in turn affects the resulting per-
formance factors. The process models are combined into a project model
(See Section 2.6), which is then simulated to generate schedules for the
real system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Shape change of beta distribution probability density function when ma-
nipulating the shape parameters α (left) and β (right). . . . . . . . . . . . 22
2.3 Change of truncated normal distribution probability density function when
manipulating the parameters µ (left) and σ (right). . . . . . . . . . . . . . 23
2.4 Change of truncated gamma distribution probability density function when
manipulating the parameters k (left) and θ (right). . . . . . . . . . . . . . 24
2.5 Change of truncated Weibull distribution probability density function when
manipulating the parameters k (left) and λ (right). . . . . . . . . . . . . . 25
2.6 Sample data of four probability distributions. Beta distribution α=10, β=3
(Top left), Normal distribution µ=0.5, σ=0.1 (Top right), Gamma distri-
bution k=7, θ=1 (Bottom left), Weibull distribution k=5, λ=1 (Bottom
right). The plots show 10,000 samples per distribution. . . . . . . . . . . 26
2.7 Bored pile wall example illustration. The drilling rig is positioned above
the drill location (a), then the rig adjusts to be exactly upright (b), the
hole is drilled (c), and then filled with concrete (d). This illustration was
inspired by https://de.wikipedia.org/wiki/Datei:Foundation_pile_scheme.
svg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.8 Bored pile wall example as a block diagram in the notation for the General
Purpose Systems Simulation (GPSS). The call element on the top left
outputs all the piles that need to be produced one by one. Following the
arrows, the pile is then produced and is finished once the pile instance
reaches node 15, which terminates the production of that pile. . . . . . . . 32
2.9 Bored Pile Wall Example as a block diagram in the notation for the Gen-
eral Purpose Systems Simulation (GPSS) with two drilling rigs. The call
element on top outputs each pile that needs to be produced one by one. In
node 2 a drilling rig is selected for each pile, after which the respective
rig is used for production, creating the two parallel streams for seizure of
the rig. Once the drilling is seized, the streams merge again, producing
the pile, and eventually releasing the seized rig. Eventually each pile’s
production terminates in node 22. . . . . . . . . . . . . . . . . . . . . . 33
2.10 Bored pile wall example as an Activity Cycle Diagram (ACD). This dia-
gram shows the flow of the resources as differently colored arrows. First
a hole is selected in the "next hole" idle state, going into the "DRILL"
active state, requiring the drilling rig to be in the idle state "ready". Once
drilling is done, the drilling rig can move to the idle state "done 1", while
the drilled hole can move to the idle state "done 3". Then the drilling rig
enters the "MOVE" active state, freeing the drilled hole, which goes into
the "done 4" state. The drilling rig moves into the "done 2" state, being
ready to be adjusted for the next hole. The drilled hole can now be filled
with concrete, hence enter the "POUR CONCRETE" active state, given
the cement truck is not currently filling another hole, hence is in the "wait"
state. Once the hole is filled, the hole resource can proceed to the next hole. 34
2.11 Bored pile wall example as a CYCLONE diagram. The colors of the
arrows indicate the resource cycle, same as in Figure 2.10. The function
element "holes 1" outputs the piles that need to be produced. The combi-
state "DRILL" is then executed, given the drilling rig is available. Once the
drilling is finished the drilling rig moves away in the "MOVE" active state
and adjusts in the "ADJUST" active state. The hole goes into the "holes
2" queue element, where the holes queue up, waiting for the cement truck.
The cement truck moves to the "truck" queue element when finished and
then becomes available for the combi-state "POUR CONCRETE". Once
a hole is filled with concrete, it moves to a counter element and proceeds
to the "piles done" queue element. . . . . . . . . . . . . . . . . . . . . . 36
3.1 Bored pile wall project example illustration viewed from above. A rect-
angular pit is to be excavated. To prevent earth from caving in, two sides
of the pit have to be secured using anchored bored pile walls. . . . . . . . 51
3.2 Plot of 50 random points and their Pareto frontier illustrated by the thick
red line. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Plot of 10 random points with the light blue star at (0.2; 0.5) indicating
the compromise point. The thin blue lines indicate the minima for each
objective, the red star in the lower left is the utopia point. . . . . . . . . . 55
3.4 Types of precedence relationships. a) Finish-Start (FS), b) Finish-Finish
(FF), c) Start-Finish (SF), d) Start-Start (SS). . . . . . . . . . . . . . . . 57
3.5 Bored pile wall project example precedence constraints. The two bored
pile walls (B1 and B2) need to be finished before the first layer of ex-
cavation (E11 and E12), after which the pile walls can be anchored (A1
and A2), followed by the second layer of excavation (E21 and E22). Fur-
thermore, excavation activities E3 and E4 need to be performed. When all
excavation is finished the foundation (F) can be produced. . . . . . . . . . 58
3.6 Bored pile wall project example solution. . . . . . . . . . . . . . . . . . 59
3.7 Bored pile wall project example after the forward pass of the critical path
method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.8 Bored pile wall project example after the backward pass of the critical
path method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.9 Visualization of critical path and total float times for the example in Figure
3.5. The gray bars indicate total float times, while the blue activities are
critical. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.10 Representation of an activity in the PERT chart notation. . . . . . . . . . 63
3.11 Anchored bored pile wall project example solution in PERT notation. The
critical path, consisting of all activities with zero float time (B2 → E12 →
A2 → E22 → F), is highlighted in blue. The corresponding Gantt Chart
is shown in Figure 3.9. . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.12 Bored pile wall project example with precedence and resource constraints. 65
3.13 Bored pile wall project example precedence and resource constraints. . . 66
3.14 Scheme of a Resource Constrained Project Scheduling Problem (RCPSP). 67
3.15 Bored pile wall project example resource usage. . . . . . . . . . . . . . . 68
3.16 Bored pile wall project example solution with resource constraints. . . . . 69
3.17 Bored pile wall project example resource usage. . . . . . . . . . . . . . . 69
3.18 Bored pile wall project example solution with resource constraints. . . . . 70
3.19 Bored pile wall project example resource usage. . . . . . . . . . . . . . . 70
3.20 Illustration of global and local left shift operations. Left shifting E3 to t1
is a local left shift (labeled l), while shifting E4 to t3 is a global left shift
(labeled g). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.21 Feasible schedule for anchored bored pile wall project example. . . . . . 72
3.22 Semi-active schedule for anchored bored pile wall project example. . . . 72
3.23 Non-delay schedule for anchored bored pile wall project example. . . . . 73
3.24 Unit time duration transformation of the schedule in Figure 3.23 . . . . . 73
3.25 Active schedule for anchored bored pile wall project example. . . . . . . 73
3.26 Priorities assigned to each activity of the anchored bored pile wall project
example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.27 Schedule and resource profile created by the serial schedule generation
scheme based on the priorities assigned in Figure 3.26. The critical path
is highlighted in blue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.28 Schedule and resource profile created by the parallel schedule generation
scheme based on the priorities assigned in Figure 3.26. The critical path
is highlighted in blue. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.29 Comparison of performance of the serial and parallel schedule generation
schemes with random problems with up to 1000 activities. . . . . . . . . 79
3.30 Flowchart of simulation concept by Beißert et al. (2010). . . . . . . . . . 81
3.31 Schedule from Figure 3.27 with sequence enforcement constraints added
by the Dori-Algorithm depicted as thick red arrows. . . . . . . . . . . . . 82
3.32 Schedule resulting from backward simulation run according to the Dori-
Algorithm based on the sequence enforcement constraints in Figure 3.31. 83
3.33 Float times and critical path resulting from Dori-Algorithm for the sched-
ule in Figure 3.31. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.34 Simplified bored pile wall project example with priorities on the right
inside of each activity’s box. . . . . . . . . . . . . . . . . . . . . . . . . 89
3.35 Example schedule for simplified bored pile wall project example in Figure
3.34. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.36 Example schedule from Figure 3.35 after swapping E1 and E2 using the
serial schedule generation scheme. . . . . . . . . . . . . . . . . . . . . . 90
3.37 Schedule from Figure 3.35 after additionally swapping B1 and B2. . . . . 90
3.38 Example schedule from Figure 3.34 after swapping B1 and B2 using the
serial schedule generation scheme. . . . . . . . . . . . . . . . . . . . . . 91
3.39 Example schedule from Figure 3.34 after swapping B1 and B2 using the
parallel schedule generation scheme. . . . . . . . . . . . . . . . . . . . . 91
3.40 Simplified bored pile wall and sheet pile wall project example with prior-
ities on the right inside of each activity’s box. . . . . . . . . . . . . . . . 92
3.41 Schedule for the example in Figure 3.40. . . . . . . . . . . . . . . . . . . 92
3.42 Schedule for the example in Figure 3.40 after swapping E1 and E2, when
using the serial schedule generation scheme. . . . . . . . . . . . . . . . . 92
3.43 Tree created from swaps for a problem with 5 activities. Each node con-
tains a vector of priorities. One pair of priorities is swapped along each
arc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.1 Search Space generated by the pseudo-serial schedule generation scheme
for a two-activity problem. Each axis corresponds to the priority of one
activity. The fractional parts are priority values, the integer parts are delays. 100
4.2 The example problem from Section 3.4.8.8. Five activities using three
different resource types. . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.3 The only schedule that can be generated by the parallel schedule genera-
tion scheme for the example in Figure 4.2. . . . . . . . . . . . . . . . . 101
4.4 The optimal solution for the problem in Figure 4.2. Compared to the
solution in Figure 4.3 this is not a non-delay schedule, but requires activity
E2 to be delayed once, which is illustrated by a red double arrow. . . . . . 101
4.5 A solution for the problem in Figure 4.2 that delays activity E2 while E2
has a lower priority value than E1. This schedule cannot be generated
by either of the serial and parallel schedule generation schemes. It can
however be generated by the pseudo-serial schedule generation scheme. . 102
4.6 A solution for the problem in Figure 4.2 that delays activity E2 twice,
while E1 has a lower priority value than E2. This schedule cannot be gen-
erated by either of the serial and parallel schedule generation schemes. It
can however be generated by the pseudo-serial schedule generation scheme.102
4.7 Convergence of serial (SSGS), parallel (PSGS) and pseudo-serial sched-
ule generation schemes on the PSPLIB 30 activity benchmark set. The
Water Wave Optimization (WWO) algorithm was run using all 3 schedul-
ing schemes. In case of the pseudo-serial schedule generation scheme,
the initial population vi was varied, illustrating the transition between the
serial and parallel schedule generation schemes. . . . . . . . . . . . . . . 103
4.8 Surface of parameter values. Horizontal axes show population size np and
number of iterations i. Vertical axis shows the mean error on the PSPLIB
j30 problem set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.9 Example Schedule to illustrate local optimality. Rectangles represent ac-
tivities, fill colors represent resources, and arrows represent precedence
constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.10 The schedule from Figure 4.9 after swapping activities B and D. The swap
increases the makespan of the schedule. . . . . . . . . . . . . . . . . . . 107
4.11 The schedule from Figure 4.9 after swapping activities A and C. The swap
increases the makespan of the schedule. . . . . . . . . . . . . . . . . . . 107
4.12 The schedule from Figure 4.9 after swapping activities A and C, as well
as B and D. Applying both swaps decreases the makespan of the schedule. 108
4.13 Search tree of schedules based on swaps and delays generated using Algo-
rithm 5. The numbers in the upper right corner of each rectangle indicates
the node index. The top schedule is the initial schedule from Figure 4.9.
The schedule is changed by either a swap or a delay at each edge. Dou-
ble arrowheads indicate delays. The red rectangle indicates the optimal
solution for the problem. As the depiction contains one undirected cycle
along nodes 1, 2, 4, and 6, it is not strictly a tree. However, node 6 would
practically only be reached through either node 2 or 4. . . . . . . . . . . 110
4.14 The schedule from Figure 4.9 after either delaying activity B, assuming
the priority value for D is higher than that for B, or after swapping activi-
ties B and D. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.15 Uncertain durations of the 30 individual activities for the j30u problem
with instance 1 and parameter 1. Each activity is delayed using a beta
distributed random variable. The horizontal axes show the duration, the
vertical axes the probability. Activities are arranged from left to right,
then top to bottom. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.16 Schedule for the j30u problem with instance 1 and parameter 1, as shown
in Figure 4.15, using the default activity order with the parallel schedule
generation scheme. Each plot corresponds to one activity and shows the
probability of an activity being active at a certain time. The plots are
based on 100,000 runs. Activities are arranged from left to right, then top
to bottom. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.17 Dependency graph of the 30 activities for the j30u problem with instance
1 and parameter 1. The numbers in parentheses are the minimum dura-
tions. Activities 1 and 32 are added source and sink activities with dura-
tion 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.18 Uncertain duration of a schedule for the j30u problem with instance 1
and parameter 1, as shown in Figure 4.15, using the default activity order
with the parallel schedule generation scheme. The vertical axis shows the
number of occurrences out of 1,000,000 runs. . . . . . . . . . . . . . . . 121
5.1 Matching of image features. a) and b) show two input images. The blue
dashed lines connect matching features in c) and d). . . . . . . . . . . . . 127
5.2 The location of a feature in a single two-dimensional image does not re-
veal its location in three-dimensional space. . . . . . . . . . . . . . . . . 128
5.3 The locations of a feature in two two-dimensional images reveal its true
location, given the locations and parameters of both cameras are known. . 129
5.4 Example for two features in three images. . . . . . . . . . . . . . . . . . 129
6.1 Laser scanning an excavation pit. The laser scanner is placed in the right
of the image, illustrated by a gray square. The blue area shows the field
of view of the laser scanner. Due to the shape of the pit, the scanner can
hardly capture the entire soil surface. . . . . . . . . . . . . . . . . . . . . 135
6.2 Laser scanning an excavation pit. The laser scanner is placed in the right
of the image, illustrated by a gray square. The blue area shows the field
of view of the laser scanner. Due to the shape of the pit and an excavator
occluding parts of the area, the scanner can hardly capture the entire soil
surface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.3 The operator’s perspective, looking into the excavation pit, while taking
photos for photogrammetric point cloud generation. The ten blue rect-
angles represent the photos taken by the operator. The photos need to
overlap such that they share common features. . . . . . . . . . . . . . . . 136
6.4 Taking photos of an excavation pit for photogrammetric point cloud gen-
eration. The darker area in the center is the excavation pit. Each blue
triangle shows the orientation of a photo that was taken. The operator
moves around the pit taking photos of all currently visible areas of the pit. 137
6.5 Cutting plane of an excavation pit while taking photos for photogrammet-
ric point cloud generation. Each blue triangle shows the orientation of a
photo that was taken. At each location the operator takes photos of all
currently visible areas of the pit. . . . . . . . . . . . . . . . . . . . . . 137
6.6 Camera positions of photos taken on an actual construction site. The cir-
cular area is the excavation pit and each pyramid represents a photo that
was taken. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.7 Point cloud generated from photographs illustrated in Figure 6.6. . . . . . 139
6.8 Raw, unprocessed point cloud. Structures outside the excavation area have
been included in the point cloud. . . . . . . . . . . . . . . . . . . . . . 139
6.9 Vertical histogram of the point cloud. The graph shows the number of
points at the different elevations in the point cloud. The bottom layer within
the point cloud is clearly visible as the spike at an elevation of around 30.
The surrounding ground layer can be estimated at the left of the graph,
where the histogram flattens out. . . . . . . . . . . . . . . . . . . . . . . 140
6.10 The point cloud from Figure 6.8 after cleaning. Points outside the exca-
vation area have been removed. . . . . . . . . . . . . . . . . . . . . . . . 141
6.11 The convex hull of a u-shaped excavation pit would have a much higher
volume than the excavation pit itself. The white area is the actual excavation
pit; the orange area is the convex hull. . . . . . . . . . . . . . . . . . . . 141
6.12 Convex hull of the point cloud in Figure 6.10. The hull was created using
the quickhull algorithm by Barber et al. (1996). . . . . . . . . . . . . . . 142
6.13 Meshed surface of the point cloud in Figure 6.10. The surface was gener-
ated using the Poisson surface reconstruction algorithm by Kazhdan et al.
(2006). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6.14 Meshed surface of the point cloud in Figure 6.10. The surface was gener-
ated using the ball-pivoting algorithm by Bernardini et al. (1999). The
red circle marks a hole that was left open. . . . . . . . . . . . . . . . . . 142
6.15 The point cloud from Figure 6.10 after all vertical duplicate points have
been removed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.16 The point cloud from Figure 6.15 after refinement to fill holes. . . . . . . 144
6.17 The point cloud from Figure 6.16 after resampling to reduce the number
of points and distribute them evenly. . . . . . . . . . . . . . . . . . . . . 144
6.18 Overview of the system combining the presented photogrammetry ap-
proach with the video analysis system by Ogunmakin et al. (2013). . . . . 145
6.19 Dump truck state estimates for a video segment of 6 minutes duration and
the activity states in a pie chart (Bügler et al., 2016). . . . . . . . . . . . . 146
7.1 Flowchart of the overall concept of the reactive scheduling approach. An
initial schedule is built on the basis of a model of the construction project.
Once construction has started, real-time data is acquired. If construction
is not yet finished, the model is updated and it is verified whether the float
time of an activity was exceeded. If that is the case an updated schedule
is generated. This process is repeated until construction has finished. . . . 149
7.2 Example of a simple construction schedule. The excavation activity A
requires no shared resources, while the shoring activities B through E
share the same resource, of which only one instance is available. Parallel
execution of any of the activities B, C, D, and E is hence prohibited. . . . 150
7.3 The schedule from Figure 7.2 after a delay of activity A has been ob-
served. The delay propagates through the remaining schedule because
the float time of activity A has been exceeded. Activities D and E are now
scheduled to be executed later. . . . . . . . . . . . . . . . . . . . . . . . 151
7.4 The schedule from Figure 7.2 after a delay of activity A has been ob-
served, and since the float time was exceeded, a new schedule was gener-
ated. Now activity E is executed during the delay of activity A. . . . . . 151
7.5 The resource profile for the green resource of the schedule in Figure 7.2. . 152
7.6 The resource profile for the green resource of the schedule in Figure 7.3. . 152
7.7 The resource profile for the green resource of the schedule in Figure 7.4. . 152
8.1 Point cloud of the office building excavation site. . . . . . . . . . . . . . 155
8.2 Time series of point clouds of the office building excavation site. . . . . . 156
8.3 Volumes of the series of point clouds in Figure 8.2. . . . . . . . . . . . . 156
8.4 Performance factors calculated based on the volume calculations in Fig-
ure 8.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.5 Beta distribution estimated based on the performance factors in Figure
8.4, with α = 1.4427 and β = 2.7839. The mean performance was
1227.5 m³/day. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.6 Point cloud of the underground parking garage excavation site. Lifting
soil out of the pit slowed down the excavation progress. . . . . . . . . . . 159
8.7 Volume plot of the underground parking garage excavation site. . . . . . . 160
List of Tables
A List of Symbols
B Benchmark Results
• ILDTS: Iterative limited depth tree search algorithm presented in Section 4.3.
[Figure B.1 shows four panels, one per PSPLIB problem set (PSPLIB 30, PSPLIB 60, ...);
the vertical axes show the deviation in %, the horizontal axes the problem index.]
Figure B.1: Deviation of best of 500,000 random schedules from best known solutions
for PSPLIB problems.
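The vertical axes of the benchmark figures report the relative deviation of a schedule's makespan from the best known solution, here over the best of many randomly sampled activity orders. The following minimal sketch illustrates this evaluation; the `evaluate` callback is an assumption of this sketch and stands in for a schedule generation scheme that decodes an activity order into a makespan, not for the thesis implementation itself.

```python
import random

def deviation_percent(makespan, best_known):
    """Relative deviation from the best known solution, in percent,
    as plotted on the vertical axes of Figures B.1 and B.2."""
    return 100.0 * (makespan - best_known) / best_known

def best_of_random(evaluate, n_samples, n_activities, seed=0):
    """Best makespan over n_samples random activity orders.
    `evaluate` (hypothetical) maps an activity order to a makespan,
    e.g. via a serial schedule generation scheme."""
    rng = random.Random(seed)
    best = float("inf")
    order = list(range(n_activities))
    for _ in range(n_samples):
        rng.shuffle(order)  # draw a uniformly random activity order
        best = min(best, evaluate(order))
    return best
```

For Figure B.1, 500,000 random orders per problem instance were evaluated and the best resulting makespan compared against the best known solution.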
[Figure B.2 shows four panels; the vertical axes show the deviation in %, the horizontal
axes the problem index.]
Figure B.2: Deviation of algorithms from optima for the PSPLIB j30 problem set.
Table B.1: Ratio of optima of the PSPLIB j30 problems found by algorithms and the
respective mean deviations
B.1 PSPLIB Results
Table B.2: Ratio of optima of the PSPLIB j60 problems found by algorithms and the
respective mean deviations
Table B.3: Ratio of optima of the PSPLIB j90 problems found by algorithms and the
respective mean deviations
Table B.4: Ratio of optima of the PSPLIB j120 problems found by algorithms and the
respective mean deviations
B.2 UPSPLIB Results
Table B.5: Best solutions of water wave optimization on the uncertainty-extended PSPLIB
j60u set for the different objective functions O1 to O8. The result measures have been
obtained through 10,000 Monte Carlo runs of the final solution.

Result \ Objective    O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.21    1.21    1.22    1.54    1.22    1.22    1.22    1.22
O2 (mean)             1.33    1.32    1.32    1.68    1.32    1.32    1.32    1.32
O3 (max)              1.49    1.47    1.46    1.85    1.46    1.47    1.46    1.47
O4 (stdev)            2.82    2.41    2.09    2.47    2.11    2.17    2.09    2.09
Result \ Objective    O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.46    1.46    1.49    2.36    1.48    1.53    1.56    1.49
O2 (mean)             1.68    1.59    1.60    2.62    1.62    1.67    1.66    1.62
O3 (max)              1.91    1.97    2.11    2.77    1.88    2.09    2.14    2.06
O4 (stdev)           12.79    9.44    7.04    8.30    5.72    9.87    8.01    6.24
Result \ Objective    O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.09    1.08    1.09    1.09    1.09    1.09    1.08    1.09
O2 (mean)             1.16    1.15    1.15    1.15    1.16    1.15    1.16    1.16
O3 (max)              1.23    1.21    1.20    1.22    1.22    1.21    1.21    1.21
O4 (stdev)            1.48    1.57    1.48    1.56    1.51    1.56    1.54    1.58
Table B.8: Best solutions of water wave optimization on the uncertainty-extended PSPLIB
j90u set for the different objective functions O1 to O8. The result measures have been
obtained through 10,000 Monte Carlo runs of the final solution.

Result \ Objective    O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.27    1.27    1.27    1.48    1.27    1.27    1.28    1.28
O2 (mean)             1.38    1.37    1.38    1.61    1.37    1.38    1.38    1.38
O3 (max)              1.53    1.51    1.52    1.79    1.51    1.52    1.52    1.52
O4 (stdev)            4.27    3.79    3.70    4.36    3.68    3.63    3.56    3.56
Result \ Objective    O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.39    1.39    1.40    1.63    1.39    1.40    1.41    1.42
O2 (mean)             1.54    1.52    1.52    1.79    1.52    1.53    1.53    1.54
O3 (max)              1.73    1.70    1.71    1.99    1.70    1.71    1.71    1.72
O4 (stdev)            6.81    6.01    5.66    6.34    5.77    5.69    5.58    5.57
Result \ Objective    O1      O2      O3      O4      O5      O6      O7      O8
O1 (min)              1.68    1.59    1.65    2.08    1.68    1.63    1.68    1.67
O2 (mean)             1.87    1.79    1.85    2.30    1.83    1.81    1.84    1.83
O3 (max)              2.24    2.16    2.14    2.56    2.07    2.14    2.26    2.29
O4 (stdev)           26.36   16.09   15.35   15.07   14.15   13.66   13.69   14.83
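The result measures O1 to O4 in the tables above are the minimum, mean, maximum, and standard deviation of the makespan over Monte Carlo runs with sampled activity durations. A minimal sketch of how such measures can be computed is given below; the toy project, the beta parameters, and the scaling of the performance factor are illustrative assumptions, not the actual uncertainty-extended PSPLIB instances.

```python
import random
import statistics

# Hypothetical toy project: activity -> (planned duration, predecessors),
# listed in a precedence-feasible order.
PROJECT = {
    "A": (4.0, []),
    "B": (3.0, ["A"]),
    "C": (2.0, ["A"]),
    "D": (5.0, ["B", "C"]),
}

def sample_makespan(rng):
    """One Monte Carlo run: scale each planned duration by a beta-distributed
    performance factor (an assumed uncertainty model) and compute the
    makespan with an earliest-start forward pass over the precedences."""
    finish = {}
    for act, (duration, preds) in PROJECT.items():
        factor = 2.0 * rng.betavariate(1.44, 2.78)  # assumed parameters
        start = max((finish[p] for p in preds), default=0.0)
        finish[act] = start + duration * factor
    return max(finish.values())

def result_measures(runs=10_000, seed=42):
    """Result measures as in the tables: min (O1), mean (O2), max (O3), and
    stdev (O4) of the makespan over Monte Carlo runs, normalized here by
    the planned makespan of the critical path A-B-D."""
    rng = random.Random(seed)
    planned = 4.0 + 3.0 + 5.0
    samples = [sample_makespan(rng) / planned for _ in range(runs)]
    return (min(samples), statistics.mean(samples),
            max(samples), statistics.stdev(samples))
```

In the tables, the optimizer first selects a schedule under one of the objectives O1 to O8, and the final solution is then re-evaluated with 10,000 such Monte Carlo runs to obtain all four result measures.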