Eva Vallada, Rubén Ruiz. European Journal of Operational Research 211 (2011) 612–622.
Article history: Received 25 June 2009; Accepted 4 January 2011; Available online 9 January 2011.
Keywords: Parallel machine; Scheduling; Makespan; Setup times.

Abstract: In this work a genetic algorithm is presented for the unrelated parallel machine scheduling problem in which machine and job sequence dependent setup times are considered. The proposed genetic algorithm includes a fast local search and a local search enhanced crossover operator. Two versions of the algorithm are obtained after extensive calibrations using the Design of Experiments (DOE) approach. We review, evaluate and compare the proposed algorithm against the best methods known from the literature. We also develop a benchmark of small and large instances to carry out the computational experiments. After an exhaustive computational and statistical analysis we conclude that the proposed method shows excellent performance, outperforming the other evaluated methods on a comprehensive benchmark set of instances.

© 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.ejor.2011.01.011
problem with sequence dependent setup times (independent of the machine) and the objective to minimise the sum of weighted tardiness of the jobs. In Kurz and Askin (2001), the authors proposed several heuristics and a genetic algorithm to minimise the makespan. Other heuristics for the same problem are those proposed by Gendreau et al. (2001) and Hurink and Knust (2001). In both cases the objective is to minimise the makespan and in the latter case precedence constraints are also considered. In Eom et al. (2002) and Dunstall and Wirth (2005) heuristics are proposed for the family setup times case. In Tahar et al. (2006) a linear programming approach is proposed where job splitting is also considered. Anghinolfi and Paolucci (2007) and Pfund et al. (2008) present heuristic and metaheuristic methods for the same problem, respectively.

The unrelated parallel machines case with sequence dependent setup times has been less studied and only a few papers can be found in the literature. A tabu search algorithm is given in Logendran et al. (2007) for the weighted tardiness objective. Another heuristic for the unrelated parallel machine case, with the objective of minimising the weighted mean completion time, is that proposed by Weng et al. (2001). Kim et al. (2002) proposed a simulated annealing method with the objective to minimise the total tardiness. In Kim et al. (2003) and Kim and Shin (2003) a heuristic and a tabu search algorithm were proposed with the objective to minimise the total weighted tardiness and the maximum lateness, respectively. The same problem is also studied in Chen (2005), Chen (2006) and Chen and Wu (2006), where resource constraints are also considered, with the objective to minimise the makespan, the maximum tardiness and the total tardiness, respectively. In Rabadi et al. (2006) a heuristic for the unrelated machine case with the objective to minimise the makespan is also presented. Rocha de Paula et al. (2007) proposed a method based on the VNS strategy for both cases, identical and unrelated parallel machines, for the makespan objective. In Low (2005) and Armentano and Felizardo (2007) the authors proposed a simulated annealing method and a GRASP algorithm, with the objective to minimise the total flowtime and the total tardiness, respectively.

Regarding exact methods, some papers are available in the literature for the parallel machine problem. However, most of them are only able to solve instances with a small number of jobs and machines (more details in Allahverdi et al. (2008)).

In this paper, we deal with the unrelated parallel machine scheduling problem in which machine and job sequence dependent setup times are considered, i.e., the setup times depend on both the job sequence and the assigned machine. We evaluate and compare some of the above methods available in the literature. We also propose a genetic algorithm that shows excellent performance on a large benchmark of instances.
3. MIP mathematical model

In this section, we provide a Mixed Integer Programming (MIP) mathematical model for the unrelated parallel machine scheduling problem with sequence dependent setup times. Note that this model is an adapted version of that proposed by Guinet (1993). We first need some additional notation in order to simplify the exposition of the model.

p_ij: processing time of job j, j ∈ N, at machine i, i ∈ M.
S_ijk: machine based sequence dependent setup time on machine i, i ∈ M, when processing job k, k ∈ N, after having processed job j, j ∈ N.

The model involves the following decision variables:

X_ijk = 1 if job j precedes job k on machine i, and 0 otherwise.
C_ij = completion time of job j at machine i.
C_max = maximum completion time.

The objective function is:

\min C_{\max},   (1)

and the constraints are:

\sum_{i \in M} \sum_{j \in \{0\} \cup N,\, j \neq k} X_{ijk} = 1, \quad \forall k \in N,   (2)

\sum_{i \in M} \sum_{k \in N,\, k \neq j} X_{ijk} \leq 1, \quad \forall j \in N,   (3)

\sum_{k \in N} X_{i0k} \leq 1, \quad \forall i \in M,   (4)

\sum_{h \in \{0\} \cup N,\, h \neq k,\, h \neq j} X_{ihj} \geq X_{ijk}, \quad \forall j, k \in N, j \neq k, \forall i \in M,   (5)

C_{ik} + V(1 - X_{ijk}) \geq C_{ij} + S_{ijk} + p_{ik}, \quad \forall j \in \{0\} \cup N, \forall k \in N, j \neq k, \forall i \in M,   (6)

C_{i0} = 0, \quad \forall i \in M,   (7)

C_{ij} \geq 0, \quad \forall j \in N, \forall i \in M,   (8)

C_{\max} \geq C_{ij}, \quad \forall j \in N, \forall i \in M,   (9)

X_{ijk} \in \{0, 1\}, \quad \forall j \in \{0\} \cup N, \forall k \in N, j \neq k, \forall i \in M.   (10)

The objective is to minimise the maximum completion time or makespan. Constraint set (2) ensures that every job is assigned to exactly one machine and has exactly one predecessor. Notice the usage of the dummy jobs 0 as X_i0k, i ∈ M, k ∈ N. With constraint set (3) we set the maximum number of successors of every job to one. Set (4) limits the number of successors of the dummy jobs to a maximum of one on each machine. With set (5) we ensure that jobs are properly linked within a machine: if a given job j is processed on a given machine i, a predecessor h must exist on the same machine. Constraint set (6) controls the completion times of the jobs at the machines. Basically, if a job k is assigned to machine i after job j (i.e., X_ijk = 1), its completion time C_ik must be greater than the completion time of j, C_ij, plus the setup time between j and k and the processing time of k. If X_ijk = 0, then the big constant V renders the constraint redundant. Sets (7) and (8) define completion times as 0 for dummy jobs and non-negative for regular jobs, respectively. Set (9) defines the maximum completion time. Finally, set (10) defines the binary variables. Therefore, in total, the model contains n²m binary variables, (n + 1)m + 1 continuous variables and 2n²m + nm + 2n + 2m constraints. This MIP model will be used later in the computational experiments.
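The model above maps directly onto any MIP modelling layer. The following sketch is not part of the original paper: it shows one possible encoding of objective (1) and constraints (2)-(10) using the open-source PuLP library in Python, assuming dict-like data containers p and S indexed by machine and job identifiers, jobs numbered 1..n with 0 reserved for the dummy job, and a sufficiently large constant V.

```python
# Illustrative sketch (not the authors' Delphi code): MIP model (1)-(10) in PuLP.
# Assumptions: p[i][j] processing times, S[i][j][k] setup times, V a big constant.
import pulp

def build_model(M, N, p, S, V):
    jobs0 = [0] + list(N)                          # {0} U N, job 0 is the dummy
    prob = pulp.LpProblem("UPM_setup_makespan", pulp.LpMinimize)
    X = {(i, j, k): pulp.LpVariable(f"X_{i}_{j}_{k}", cat="Binary")
         for i in M for j in jobs0 for k in N if j != k}
    C = {(i, j): pulp.LpVariable(f"C_{i}_{j}", lowBound=0)        # (8) via lowBound
         for i in M for j in jobs0}
    Cmax = pulp.LpVariable("Cmax", lowBound=0)
    prob += Cmax                                                  # objective (1)
    for k in N:                                                   # (2)
        prob += pulp.lpSum(X[i, j, k] for i in M for j in jobs0 if j != k) == 1
    for j in N:                                                   # (3)
        prob += pulp.lpSum(X[i, j, k] for i in M for k in N if k != j) <= 1
    for i in M:                                                   # (4)
        prob += pulp.lpSum(X[i, 0, k] for k in N) <= 1
    for i in M:                                                   # (5)
        for j in N:
            for k in N:
                if j != k:
                    prob += pulp.lpSum(X[i, h, j] for h in jobs0
                                       if h not in (j, k)) >= X[i, j, k]
    for i in M:
        prob += C[i, 0] == 0                                      # (7)
        for j in jobs0:                                           # (6)
            for k in N:
                if j != k:
                    prob += (C[i, k] + V * (1 - X[i, j, k])
                             >= C[i, j] + S[i][j][k] + p[i][k])
        for j in N:                                               # (9)
            prob += Cmax >= C[i, j]
    return prob
```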
4. Proposed genetic algorithm

Genetic algorithms (GAs) are bio-inspired optimisation methods (Holland, 1975) that are widely used to solve scheduling problems (Goldberg, 1989). Generally, the input of a GA is a set of solutions, called the population of individuals, that will be evaluated. Once the evaluation of the individuals is carried out, parents are selected and a crossover mechanism is applied to obtain a new generation of individuals (offspring). Moreover, a mutation scheme is also applied in order to introduce diversification into the population. The main features of the proposed genetic algorithm are the application of a fast local search procedure and a local search enhanced crossover operator. In the following subsections we give a detailed description of the developed GA.
4.1. Representation of solutions, initialisation of the population and selection operator

The most commonly used solution representation for the parallel machine scheduling problem is an array of jobs for each machine that represents the processing order of the jobs assigned to that machine. This representation is complete in the sense that all feasible solutions of the MIP model presented in Section 3 can be represented (recall that C_max is a regular criterion and therefore no machine should be left idle when capable of being occupied by a job). The GA is formed by a population of P_size individuals, where each individual consists of m arrays of jobs (one per machine).
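As a concrete illustration of this representation, the sketch below (our own illustration, not the authors' Delphi implementation) stores an individual as m job lists and evaluates its makespan from the processing and setup time data of Section 3; the names p, S and evaluate_makespan are assumptions.

```python
# Illustrative sketch: one individual = one job sequence per machine.
# Assumed data: p[i][j] processing time of job j on machine i,
# S[i][j][k] setup time on machine i when job k follows job j.
from typing import List

def evaluate_makespan(individual: List[List[int]], p, S) -> int:
    """Return C_max of an individual given processing and setup times."""
    makespan = 0
    for i, sequence in enumerate(individual):
        completion, previous = 0, None
        for job in sequence:
            if previous is not None:
                completion += S[i][previous][job]   # sequence dependent setup
            completion += p[i][job]                 # machine dependent processing
            previous = job
        makespan = max(makespan, completion)
    return makespan

# Example: 2 machines, jobs 0..3 assigned as [[0, 2], [1, 3]]
# cmax = evaluate_makespan([[0, 2], [1, 3]], p, S)
```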
It is also common to randomly generate the initial population in a genetic algorithm. However, a recent trend consists in including in the population some good individuals provided by effective heuristics. One of the best heuristics for the parallel machine scheduling problem is the Multiple Insertion (MI) heuristic proposed by Kurz and Askin (2001). The MI heuristic starts by ordering the jobs according to a matrix with the processing and setup times. After this initial solution, each job is inserted in every position of every machine and is finally placed in the position that results in the lowest makespan. One individual of the population is obtained by means of the MI heuristic and the remaining ones are randomly generated. However, in order to obtain a good initial population, the MI heuristic is also applied to the random individuals, that is, for each random individual each job is inserted in every position of every machine and finally the job is placed in the position that results in the best makespan.
Regarding the selection mechanism, in classical genetic algorithms tournament and ranking-like selection operators are common. Such operators either require fitness and mapping calculations or the population to be continuously sorted. In this work, a much simpler and faster selection scheme, called n-tournament, is used (Ruiz and Allahverdi, 2007). In this case, according to a parameter called "pressure", a given percentage of the population is randomly selected. The individual with the lowest makespan value among the randomly selected percentage of individuals wins the tournament and is finally selected. This results in a very fast selection operator, since for each individual the C_max value can be directly used as the fitness value.
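A minimal sketch of this n-tournament selection is given below (our illustration, not the original implementation); pressure is the percentage of the population entering the tournament and the fitness of each individual is simply its makespan.

```python
# Illustrative sketch: n-tournament selection driven by a "pressure" percentage.
import random

def n_tournament(population, makespans, pressure):
    """Return the individual with the lowest makespan among a random
    'pressure' percent subset of the population."""
    size = max(1, round(len(population) * pressure / 100))
    candidates = random.sample(range(len(population)), size)
    winner = min(candidates, key=lambda idx: makespans[idx])
    return population[winner]
```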
4.2. Local search enhanced crossover and mutation operators

Once the selection is carried out and the parents have been selected, the crossover operator is applied according to a probability P_c. There are several crossover operators proposed in the literature for scheduling problems. In general, the goal of the crossover operator is to generate two good individuals, called offspring, from the two selected progenitors. One of the most used crossover operators is the one point order crossover adapted to the parallel machine case. Therefore, for each machine a point p is randomly selected from parent 1 and the jobs from the first position to position p are copied to the first offspring. The jobs from position p + 1 to the end are copied to the second offspring. In Fig. 1, an example with 12 jobs and two machines is given (setup times between jobs are not shown for clarity). Two parents are shown and for each machine a point p is selected (Fig. 1(a)). Specifically, point p1 (machine 1) is set to 3 and point p2 (machine 2) is set to 4. Therefore, the first offspring is formed with the jobs of parent 1 from position 1 to 3 on machine 1 and the jobs from position 1 to 4 on machine 2. The second offspring will contain the jobs of parent 1 from position 4 to the last one on machine 1 and from position 5 to the last one on machine 2 (Fig. 1(b)). Then, the jobs from parent 2 which are not already in the offspring are inserted in the same order. At this point, it is easy to introduce a limited local search procedure in the crossover operator. When the missing jobs are inserted from parent 2, they are inserted in every position of the offspring on the same machine and each job is finally placed in the position that results in the minimum completion time for that machine. In this way we obtain a crossover operator with a limited local search. In Fig. 1(b), jobs from parent 2 which are not already in the offspring will be inserted; that is, jobs 7, 3, 2, 1 of machine 1 from parent 1 are already inserted in offspring 1, so they are not considered. Job 5 will be inserted in every position of machine 1 and will finally be placed in the best position (lowest completion time of the machine). On machine 2, jobs 10, 9, 12 and 11 will be inserted in every position of machine 2 and will be placed in the best position. In a similar way, in offspring 2, jobs 7, 3, 2 and 1 will be inserted in every position of machine 1 and jobs 4, 6 and 8 will be inserted in every position of machine 2. In Fig. 1(c) we can see the offspring obtained after the application of the local search enhanced crossover operator (remember that setup times are not represented). In Section 6, results after testing this local search enhanced crossover operator (LSEC) will be given.
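The insertion step that turns the one point order crossover into the LSEC operator can be sketched as follows. This is our illustration under assumed names (completion_time, insert_best_position) and it evaluates the partial machine sequence naively rather than with the speed-up described in Section 4.3.

```python
# Illustrative sketch: best-position insertion used by the LSEC operator.
# Assumes p[i][j] processing times and S[i][j][k] setup times.
def completion_time(machine: int, sequence, p, S) -> int:
    total, prev = 0, None
    for job in sequence:
        total += (S[machine][prev][job] if prev is not None else 0) + p[machine][job]
        prev = job
    return total

def insert_best_position(machine: int, sequence, job, p, S):
    """Insert 'job' in the position of 'sequence' giving the lowest completion time."""
    best_seq, best_ct = None, None
    for pos in range(len(sequence) + 1):
        candidate = sequence[:pos] + [job] + sequence[pos:]
        ct = completion_time(machine, candidate, p, S)
        if best_ct is None or ct < best_ct:
            best_seq, best_ct = candidate, ct
    return best_seq

# During crossover, each job of parent 2 that is missing from an offspring is
# placed with insert_best_position() on the corresponding machine of the offspring.
```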
Once we obtain the offspring, a mutation operator can be applied according to a probability (P_m). After short experiments testing different mutation operators, the best performance was obtained by the most usual one in scheduling problems, that is, the shift mutation: a machine is randomly selected, then a job is also randomly selected and re-inserted into a different, randomly chosen position on the same machine.
4.3. Local search and generational scheme

Local search procedures are widely used in genetic algorithms as well as in other metaheuristic methods to improve solutions. For the parallel machine scheduling problem with the objective to minimise the makespan, it is possible to apply a really fast local search procedure based on the insertion neighborhood, the most used in scheduling problems. We test the inter-machine insertion neighborhood (IMI), which consists of, for all machines, inserting all the jobs in every position of all the other machines. In order to reduce the computational effort needed by the local search, it is possible to introduce a simple and very efficient speed up procedure: when a job is inserted, it is not necessary to evaluate the complete sequence to obtain the new completion time value of the machine (a code sketch of this incremental update is given below). Let us give an example: we have 7 jobs and 2 machines (Fig. 2), where grey blocks represent the setup times between the jobs. Remember that these setup times are both machine and job sequence dependent. Moreover, these setup times differ between machines 1 and 2. The makespan value for the solution shown is 21 and a step of the IMI local search is applied. For example, job 1 is going to be inserted in the second position of machine 2. To obtain the new completion time of the machines, we only have to remove the processing time of job 1 and the setup time between jobs 1 and 3 on machine 1. Then, if we want to insert job 1 on machine 2, we have to remove the setup time between jobs 2 and 4. Finally, we have to add the setup time between job 2 and job 1 on machine 2, the processing time of job 1 on machine 2, which could be different from the processing time of job 1 on machine 1, and the setup time between jobs 1 and 4 on machine 2. In this way we obtain the new completion times of the machines (Fig. 2(b)) with only 3 subtractions and 3 additions, and therefore the new makespan value (20), which is better than the previous one. In general, for each insertion, the numbers of subtractions and additions will be four and four, respectively, if the job is not inserted in the first or last position. For insertions in the first or last position, the numbers of subtractions and additions will be three and three, respectively (there is no setup time at the beginning or the end of the machine sequence).

Fig. 2. Example with 7 jobs and 2 machines for the IMI local search.

In total, the proposed local search contains the following number of steps (inter-machine insertions):

\sum_{i \in M} \sum_{l \in M,\, l \neq i} n_i (n_l + 1),

where n_i and n_l are the number of jobs assigned to machines i and l, respectively, that is, each job on machine i can be inserted in n_l + 1 positions on machine l.
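The constant-time completion time update behind this speed-up can be sketched as follows (our illustration with the assumed data structures p[i][j] and S[i][j][k]; boundary cases use a setup of zero, matching the three-operation count above):

```python
# Illustrative sketch: O(1) completion-time update when moving one job
# from machine i to machine l (IMI neighborhood with the speed-up).
def setup(S, machine, a, b):
    """Setup time between consecutive jobs a -> b; zero at sequence boundaries."""
    return 0 if a is None or b is None else S[machine][a][b]

def delta_move(C_i, C_l, i, l, seq_i, pos_i, seq_l, pos_l, p, S):
    """New completion times of machines i and l if job seq_i[pos_i] is
    removed from machine i and inserted at position pos_l of machine l."""
    job = seq_i[pos_i]
    prev_i = seq_i[pos_i - 1] if pos_i > 0 else None
    next_i = seq_i[pos_i + 1] if pos_i + 1 < len(seq_i) else None
    # Machine i: drop the job, its processing time and adjacent setups,
    # then re-link its former neighbours.
    new_C_i = (C_i - p[i][job]
               - setup(S, i, prev_i, job) - setup(S, i, job, next_i)
               + setup(S, i, prev_i, next_i))
    prev_l = seq_l[pos_l - 1] if pos_l > 0 else None
    next_l = seq_l[pos_l] if pos_l < len(seq_l) else None
    # Machine l: break the link prev_l -> next_l and splice the job in.
    new_C_l = (C_l - setup(S, l, prev_l, next_l)
               + setup(S, l, prev_l, job) + p[l][job] + setup(S, l, job, next_l))
    return new_C_i, new_C_l
```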
During the local search procedure, we have to decide which movements are accepted by applying an acceptance criterion. In our algorithm, the local search is based on the previous IMI neighborhood, so two machines are involved in the local search procedure. That is, the neighborhood is examined in the following way: for every machine, all the jobs assigned to that machine are inserted in all positions of all the other machines. However, the examination of the neighborhood is carried out between pairs of machines, that is, the jobs assigned to machine 1 are inserted in all positions of machine 2 and the acceptance criterion is applied. Then, the jobs from machine 1 are inserted in all positions of machine 3, and so on. For every pair of machines, the following acceptance criterion, based on the new completion times of the machines after an insertion, is applied:

- The completion time of both machines is reduced: this is the ideal situation, since if the movement is applied both machines reduce their completion time. Obviously, the movement is accepted in this case.
- The completion time of one machine is reduced and the completion time of the other machine is increased: in this case we have to decide on the acceptance of the movement. If the amount of time reduced on one machine is greater than the amount of time increased on the other machine and the makespan value is not increased, then the movement is accepted. Specifically:
  - C_i, C'_i: completion time of machine i before and after the insertion movement, respectively, i ∈ M.
  - C_l, C'_l: completion time of machine l before and after the insertion movement, respectively, l ∈ M, l ≠ i.
  - C_max, C'_max: makespan of the sequence before and after the insertion movement, respectively.
  Therefore, in this case the movement will be accepted when:
  C'_i < C_i, \quad i \in M,   (11)
  and
  C'_l > C_l, \quad l \in M,   (12)
  and
  C_i - C'_i > C'_l - C_l, \quad i, l \in M, i \neq l,   (13)
  and
  C'_{\max} \leq C_{\max}.   (14)
  With expression (11), the new completion time of machine i is reduced. Expression (12) shows that the new completion time of machine l is increased. Expression (13) compares the amount reduced vs. the amount increased and, finally, expression (14) checks the new makespan value (see the sketch after this list).
- The completion time of both machines is increased: in this case the movement is not accepted.
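The pairwise acceptance rule above can be coded directly. The following is our illustrative sketch, mirroring the three cases of the list; the function name and arguments are assumptions, and the new makespan would be recomputed only over the two machines that change.

```python
# Illustrative sketch: acceptance criterion (11)-(14) for an IMI insertion
# that moves a job from machine i to machine l.
def accept_move(C_i, new_C_i, C_l, new_C_l, C_max, new_C_max) -> bool:
    if new_C_i < C_i and new_C_l < C_l:          # both machines improve
        return True
    if new_C_i < C_i and new_C_l > C_l:          # trade-off case: (11) and (12)
        reduced = C_i - new_C_i
        increased = new_C_l - C_l
        return reduced > increased and new_C_max <= C_max   # (13) and (14)
    return False                                  # both machines worse: reject
```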
Moreover, the IMI local search is applied until a local optimum is reached, that is, if after a pass of the local search one or more movements are accepted, which means an improvement, the IMI local search is applied again from the beginning. As we can see, applying IMI until local optimality results in a very intensive and finely tuned local search. This is only possible with the speed-ups and accelerations proposed. The local search procedure is applied, according to a probability (P_ls), to the best individual of the initial population and to the offspring. The way movements are accepted in the local search is an important feature of the algorithm. We will see in Section 6 the effect of the local search, which improves the results significantly. Moreover, different ways to accept movements were tested, obtaining much worse results, so the idea of accepting movements by analyzing the machines in pairs, even when one machine reduces its completion time and the other one increases it, is an innovative feature of the algorithm.

Another aspect to consider is the way the offspring generated after selection, crossover and mutation are inserted into the population. This is usually known as the generational scheme. It is usual that offspring directly replace the parents. In other GAs, this procedure is carried out only after having preserved the best individuals from the last generation in order to avoid losing the best solutions (elitist GAs). Another approach is the so-called steady state GAs, where the offspring do not replace the parents but different individuals from the population. In the genetic algorithms proposed in this work, the offspring are accepted into the population only if they are better than the worst individuals of the population and if, at the same time, they are unique, i.e., there are no other identical individuals already in the population. Otherwise they are rejected. As a result, the population steadily evolves towards better average makespan values while at the same time it contains different solutions, which helps in maintaining diversity and in avoiding premature convergence to a dominant, sub-optimal individual. This kind of generational scheme was first applied in Ruiz et al. (2006), providing very good results.
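A compact sketch of this acceptance rule for offspring (our illustration, assuming the population is kept as a list of individuals with their makespans) could look like this:

```python
# Illustrative sketch: offspring enter the population only if they improve on
# the worst individual and are not duplicates of an existing individual.
def try_insert_offspring(population, makespans, offspring, offspring_makespan):
    worst = max(range(len(population)), key=lambda idx: makespans[idx])
    is_unique = all(offspring != individual for individual in population)
    if offspring_makespan < makespans[worst] and is_unique:
        population[worst] = offspring          # replace the worst individual
        makespans[worst] = offspring_makespan
        return True
    return False                               # offspring rejected
```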
4.4. Variants of the proposed genetic algorithm

First, we test a standard genetic algorithm that will be improved by adding new features. The different versions of the proposed algorithm share the features explained in the previous sections. Differences among them are mainly related to the crossover operator and the fast local search procedure. Specifically, we start with a genetic algorithm where the initial population is obtained as explained in Section 4.1 and new features are added in the following way:

- GAStd1: standard algorithm where the crossover operator is applied without the limited local search explained in Section 4.2 and there is no local search.
- GAStd2: in this case the crossover operator with the limited local search is applied and there is no local search procedure.
- GAStd3: in the third version, the crossover operator without the limited local search and the fast local search procedure explained in Section 4.3 are applied.
- GAStd4: the last version includes the crossover operator with the limited local search as well as the fast local search procedure.

We can see in Table 1 a summary of the features of all the proposed versions. Notice that the objective is to start from a very simple genetic algorithm and to add new features in order to improve its effectiveness.

Table 1
Variants of the genetic algorithm proposed.

Version   Local search enhanced crossover   Local search
GAStd1    No                                No
GAStd2    Yes                               No
GAStd3    No                                Yes
GAStd4    Yes                               Yes

Regarding the parameters of the algorithms, all four versions have the same (standard) parameter values, summarized in Table 2. These parameters will be calibrated later.

Table 2
Standard parameter values for the four variants of the genetic algorithm.

Parameter                            Value
Population size (Psize)              50
Selection pressure (Pressure)        30
Probability of crossover (Pc)        0.5
Probability of mutation (Pm)         0.2
Probability of local search (Pls)    0.4
4.5. Benchmark of instances

We tested all the variants of the proposed genetic algorithm on a proposed benchmark of instances. The processing times are uniformly distributed between 1 and 99, as is common in the literature. Regarding the setup times, we generate four subsets where the setup times are uniformly distributed between 1–9, 1–49, 1–99 and 1–124, respectively. In order to test the behaviour of the different algorithms and their sensitivity to the size of the instance, two sets of instances are generated. The first one, small instances, uses the following combinations of number of jobs and number of machines: n = {6, 8, 10, 12}, m = {2, 3, 4, 5}. The second one, large instances, tests the following values: n = {50, 100, 150, 200, 250} and m = {10, 15, 20, 25, 30}. We generate 10 replicates for each possible combination, so in total we have 640 small instances and 1000 large instances to test the algorithms, all of them available from http://soa.iti.es.

Moreover, a different set of instances is generated for the calibration experiments. In this case, 200 instances are randomly generated in the same way explained before. Only large instances are generated for the calibration experiments, with two replicates for each combination of n × m and setup times. These instances are also available from the same website. This is an important aspect, since calibrating over the final test instances could bias the results. By using a separate calibration set of instances we ensure that the final results are not skewed by overfitting issues.
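Such a generator is straightforward to reproduce. The sketch below is our assumption of how it might look (it is not the authors' tool from http://soa.iti.es); it draws processing times from U[1, 99] and setup times from one of the four stated ranges.

```python
# Illustrative sketch: random instance generator for the benchmark described above.
import random

def generate_instance(n, m, setup_upper, seed=None):
    """Return processing times p[i][j] ~ U[1, 99] and setup times
    S[i][j][k] ~ U[1, setup_upper], with setup_upper in {9, 49, 99, 124}."""
    rng = random.Random(seed)
    p = [[rng.randint(1, 99) for _ in range(n)] for _ in range(m)]
    S = [[[rng.randint(1, setup_upper) for _ in range(n)]
          for _ in range(n)] for _ in range(m)]
    return p, S

# Example: one large instance with 50 jobs, 10 machines and setups in 1-99.
# p, S = generate_instance(50, 10, 99, seed=0)
```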
4.6. Computational results for the variants of the proposed genetic algorithm

A first experiment was carried out in order to check the features of the proposed genetic algorithm. In Section 4.4 the four versions of the algorithm were explained, from the most basic to the most advanced. We denote the different algorithms as GAStd1, GAStd2, GAStd3 and GAStd4, as in Table 1.

The GAs are coded in Delphi 2007 and run on computers with an Intel Core 2 Duo processor running at 2.4 GHz with 2 GB of main memory. In order to evaluate all the variants, the benchmark of instances explained in Section 4.5 is used. The stopping criterion is set to a maximum elapsed CPU time of n × (m/2) × 30 ms. Therefore, the computational effort increases as the number of jobs and/or machines increases.

Regarding the response variable for the experiments, the average Relative Percentage Deviation (RPD) is computed for each instance according to the following expression:

\text{Relative Percentage Deviation (RPD)} = \frac{\mathit{Method}_{sol} - \mathit{Best}_{sol}}{\mathit{Best}_{sol}} \times 100,   (15)

where Best_sol is the best known solution, obtained after all the experiments carried out throughout the paper, and Method_sol is the solution obtained with a given version of the algorithm.
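For completeness, expression (15) is a one-line computation; the following tiny helper is our illustrative rendering of it.

```python
# Illustrative sketch: Relative Percentage Deviation, expression (15).
def rpd(method_sol: float, best_sol: float) -> float:
    return 100.0 * (method_sol - best_sol) / best_sol
```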
We run five replicates of each algorithm. The results for the large instances are shown in Table 3, where the 40 instances of each n × m group have been averaged (recall that the instance set also contains different values for the setup times). Worst and best results for each n × m set are in italics and bold face, respectively.

Table 3
Average Relative Percentage Deviation (RPD) for the variants of the proposed algorithm (large instances).

Instance   GAStd1   GAStd2   GAStd3   GAStd4
50 × 10    25.98    25.39    16.84    13.45
50 × 15    22.69    22.14    15.51    15.15
50 × 20    20.98    21.16    15.42    14.55
50 × 25    21.05    20.22    13.74    14.18
50 × 30    24.71    24.23    17.01    14.79
100 × 10   31.67    29.86    18.02    10.81
Average    23.69    22.93    14.81    12.07

The first interesting outcome is the great improvement of the algorithm as the new features are added, especially when the fast local search procedure is included. The second version of the algorithm (GAStd2), which incorporates the crossover operator with the limited local search procedure, improves the first version (GAStd1) by almost 5%. Regarding the fast local search, if the procedure is included in GAStd1, obtaining the GAStd3 version, the results are much better, specifically by a quite large 97%. Finally, the best results are obtained by the GAStd4 version, which includes the crossover operator with limited local search as well as the local search procedure. This method improves the previous version by 43%. Let us stress that each algorithm is run for the same CPU time; therefore, the results are fully comparable.

In order to validate the results, it is interesting to check whether the previous differences in the RPD values are statistically significant. We apply an analysis of variance (ANOVA) (Montgomery, 2007), once the three hypotheses for this statistical test are checked: normality, homoscedasticity and independence of the residuals. We can see in Fig. 3 the means plot with Tukey HSD intervals (α = 0.05). We can clearly see that there are statistically significant differences between the average RPD values of the variants. We can observe that the fourth version (GAStd4) shows the best performance; that is, the proposed genetic algorithm will include the crossover operator with limited local search (LSEC) as well as the local search procedure.

Fig. 3. Means plot and Tukey HSD intervals at the 95% confidence level for the versions of the standard genetic algorithm proposed (large instances).

5. Calibration of the GAStd4 algorithm

Calibration of algorithms is one of the most important steps in order to obtain good results. In the previous section, we proposed a standard genetic algorithm that was improved by adding new features. After obtaining a good genetic algorithm (GAStd4), in this section a first preliminary calibration experiment is carried out where several values are tested for the different parameters. Finally, a second, more finely-tuned calibration is applied, starting from the results obtained by the first one. Both calibration experiments are carried out by means of a Design of Experiments approach (Montgomery, 2007).

We use the benchmark of instances explained in Section 4.5 for both calibration experiments, that is, 200 test instances for the calibration experiments. Regarding the performance measure, we again use the Relative Percentage Deviation (RPD), following expression (15).
Table 6
Average Relative Percentage Deviation (RPD) for the proposed algorithms: setting of t to 10/30/50 in the stopping criterion only for metaheuristic methods (small instances).
Table 7
Average Relative Percentage Deviation (RPD) for the proposed algorithms: setting of t to 10/30/50 in the stopping criterion only for metaheuristic methods (large instances).
In order to validate the results, we again apply a statistical analysis (ANOVA) as in the previous experiments (the GAK algorithm is removed from the statistical analysis since it is clearly worse than the remaining ones). In Fig. 5 we can see the means plot for all t values, on average, with Tukey HSD intervals (α = 0.05). In this case, we can observe that the differences between GA1 and GA2 are statistically significant (the confidence intervals do not overlap). So we can state that GA2 is better than GA1, on average. Specifically, after a statistical analysis focused only on both methods, and not shown in the paper due to space restrictions, we can conclude that GA2 is the best method when t = 30 and t = 50, by 39.8% and 48.3%, respectively. However, when t = 10, the best method is GA1, specifically almost 10% better than GA2. This outcome could be expected, since when t = 10 the total amount of CPU time is really small and we have to remark that the local search is always applied in the GA2 algorithm. Therefore, when the amount of time is really small, GA1 is the best method.

Fig. 5. Means plot and Tukey HSD intervals at the 95% confidence level for the evaluated methods (large instances).

Regarding the remaining methods, if we focus our attention on the heuristic proposed by Rabadi et al. (2006), we can see that the calibrated version (MetaC) improves the original one (Meta). More specifically, the MetaC version is up to 16% better than the Meta algorithm run with the original values for the different parameters. Moreover, we want to remark that all the versions of the proposed algorithm, even those not calibrated, obtain much better results than the remaining ones. At this point, one could think that the good performance of the proposed algorithm is exclusively due to the fast local search procedure. In order to test this, a new experiment was carried out where the fast local search was applied to the MI heuristic by Kurz and Askin (2001) and, after this, iterations of the local search starting from a random solution were run until the stopping criterion was met. The results showed that the local search starting from random solutions was not able to improve the solution obtained by the MI with the local search. Moreover, the results reported by the MI with the local search were really close to those obtained just by the MI heuristic. Therefore, the good performance of the proposed genetic algorithm is not only due to the calibration phase or the fast local search, but to the innovative features included, especially the crossover operator and, of course, the fast local search and its acceptance criterion.

As regards the rest of the methods, we want to remark the good results obtained by the MI heuristic by Kurz and Askin (2001), which outperforms the rest of the algorithms in the comparison, including the GAK algorithm proposed by the authors in the same paper. This outcome fits with the original paper, where the performance of the GAK was shown to be worse than the performance of the MI heuristic when the size of the instance was slightly increased.

Another interesting factor to study in the experiments is the parameter t for the metaheuristic methods. Remember that all metaheuristic methods are run setting the t value parameter to 10, 30 and 50. In this way, the CPU time for the stopping criterion is increased and the results can be analyzed from the point of view of the t parameter. We apply an analysis of variance (ANOVA) as in the previous experiments, but in this case we focus our attention on the interaction between the t and algorithm factors. We can see in Fig. 6 the means plot with Tukey HSD intervals (α = 0.05) for the interaction between the t and algorithm factors for the large instances (a very similar picture is obtained for the small ones). In this case, we observe that the RPD value is much better as the t parameter increases. However, these differences are much smaller between t = 30 and t = 50 than between t = 10 and t = 30, which indicates that some algorithms are converging and/or stalling. If we focus our attention on the proposed algorithms, we can see in Fig. 7 the corresponding interaction plot for GA1 and GA2.

Fig. 7. Means plot and Tukey HSD intervals at the 95% confidence level for the interaction between the time (t) factor and algorithms GA1 and GA2 (large instances).
7. Conclusions and future research

In this work we have proposed a genetic algorithm for the parallel machine scheduling problem with sequence dependent setup times and the objective of minimising the makespan. The algorithm includes a new crossover operator which incorporates a limited local search procedure. The application of a very fast local search based on the insertion neighborhood and the acceptance criterion for movements during the local search are also innovative characteristics of the proposed algorithm.

Acknowledgments

The authors are indebted to the referees and editor for the many constructive comments that have significantly improved the paper. This work is partially funded by the Spanish Ministry of Science and Innovation, under the project "SMPA – Advanced Parallel Multiobjective Sequencing: Practical and Theoretical Advances" with reference DPI2008-03511/DPI. The authors also thank the IMPIVA – Institute for the Small and Medium Valencian Enterprise, for the project OSC with references IMIDIC/2008/137, IMIDIC/2009/198 and IMIDIC/2010/175, and the Polytechnic University of Valencia, for the project PPAR with reference 3147. Eva Vallada is also partly funded by the Government of Comunitat Valenciana under a grant (BEST 2009). The authors are also indebted to Dario Diotallevi for his help in coding some of the re-implemented methods from the literature used in the tests.
Anghinolfi, D., Paolucci, M., 2007. Parallel machine total tardiness scheduling with a Kim, D., Kim, K., Jang, W., Chen, F., 2002. Unrelated parallel machine scheduling
new hybrid metaheuristic approach. Computers & Operations Research 34, with setup times using simulated annealing. Robotics and Computer Integrated
3471–3490. Manufacturing 18, 223–231.
Armentano, V., Felizardo, M., 2007. Minimizing total tardiness in parallel machine Kim, D., Na, D., Chen, F., 2003. Unrelated parallel machine scheduling with setup
scheduling with setup times: An adaptive memory-based GRASP approach. times and a total weighted tardiness objective. Robotics and Computer
European Journal of Operational Research 183, 100–114. Integrated Manufacturing 19, 173–181.
Chen, J., 2005. Unrelated parallel machine scheduling with secondary resource Kurz, M., Askin, R., 2001. Heuristic scheduling of parallel machines with sequence-
constraints. International Journal of Advanced Manufacturing Technology 26, dependent set-up times. International Journal of Production Research 39, 3747–
285–292. 3769.
Chen, J., 2006. Minimization of maximum tardiness on unrelated parallel machines Lee, Y., Pinedo, M., 1997. Scheduling jobs on parallel machines with sequence-
with process restrictions and setups. International Journal of Advanced dependent setup times. European Journal of Operational Research 100, 464–
Manufacturing Technology 29, 557–563. 474.
Chen, J., Wu, T., 2006. Total tardiness minimization on unrelated parallel machine Logendran, R., McDonell, B., Smucker, B., 2007. Scheduling unrelated parallel
scheduling with auxiliary equipment constraints. OMEGA, The International machines with sequence-dependent setups. Computers & Operations Research
Journal of Management Science 34, 81–89. 34, 3420–3438.
Chen, Y.R., Li, X.P., Sawhney, R., 2009. Restricted job completion time variance Low, C., 2005. Simulated annealing heuristic for flow shop scheduling problems
minimisation on identical parallel machines. European Journal of Industrial with unrelated parallel machines. Computers & Operations Research 32, 2013–
Engineering 3 (3), 261–276. 2025.
Damodaran, P., Hirani, N.S., Velez-Gallego, M.C., 2009. Scheduling identical parallel Montgomery, D., 2007. Design and Analysis of Experiments, fifth ed. John Wiley &
batch processing machines to minimise makespan using genetic algorithms. Sons, New York.
European Journal of Industrial Engineering 3 (2), 187–206. Pfund, M., Fowler, J., Gadkari, A., Chen, Y., 2008. Scheduling jobs on parallel
Dunstall, S., Wirth, A., 2005. Heuristic methods for the identical parallel machine machines with setup times and ready times. Computers & Industrial
flowtime problem with set-up times. Computers & Operations Research 32, Engineering 54, 764–782.
2479–2491. Pinedo, M., 2008. Scheduling: Theory, Algorithms and Systems, third ed. Prentice
Eom, D., Shin, H., Kwun, I., Shim, J., Kim, S., 2002. Scheduling jobs on parallel Hall, New Jersey.
machines with sequence dependent family set-up times. International Journal Rabadi, G., Moraga, R., Salem, A., 2006. Heuristics for the unrelated parallel machine
of Advanced Manufacturing Technology 19, 926–932. scheduling problem with setup times. Journal of Intelligent Manufacturing 17,
Franca, P., Gendreau, M., Laporte, G., Mnller, F., 1996. A tabu search heuristic for the 85–97.
multiprocessor scheduling problem with sequence dependent setup times. Rocha de Paula, M., Gómez-Ravetti, M., Robson-Mateus, G., Pardalos, P., 2007.
International Journal of Production Economics 43, 79–89. Solving parallel machines scheduling problems with sequence-dependent setup
Gendreau, M., Laporte, G., Morais-Guimaraes, E., 2001. A divide and merge heuristic times using variable neighbourhood search. IMA, Journal of Management
for the multiprocessor scheduling problem with sequence dependent setup Mathematics 18, 101–115.
times. European Journal of Operational Research 133, 183–189. Ruiz, R., Allahverdi, A., 2007. No-wait flowshop with separate setup times to
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine minimize maximum lateness. International Journal of Advanced Manufacturing
Learning. Addison-Wesley, Reading. Technology 35 (5–6), 551–565.
Guinet, A., 1993. Scheduling sequence-dependent jobs on identical parallel Ruiz, R., Maroto, C., Alcaraz, J., 2006. Two new robust genetic algorithms for the
machines to minimize completion time criteria. International Journal of flowshop scheduling problem. OMEGA, The International Journal of
Production Research 31, 1579–1594. Management Science 34, 461–476.
Holland, J.H., 1975. Adaptation in Natural and Artificial Systems. The University of Tahar, D., Yalaoui, F., Chu, C., Amodeo, L., 2006. A linear programming approach for
Michigan Press, Ann Arbor. identical parallel machine scheduling with job splitting and sequence-
Hurink, J., Knust, S., 2001. List scheduling in a parallel machine environment with dependent setup times. International Journal of Production Economics 99,
precedence constraints and setup times. Operations Research Letters 29, 231–239. 63–73.
Kim, C., Shin, H., 2003. Scheduling jobs on parallel machines: a restricted tabu Weng, M., Lu, J., Ren, H., 2001. Unrelated parallel machine scheduling with setup
search approach. International Journal of Advanced Manufacturing Technology consideration and a total weighted completion time objective. International
22, 278–287. Journal of Production Economics 70, 215–226.