An Efficient Memetic Algorithm For The Flexible Job Shop With Setup Times
this paper and give some ideas for future work.

2 Problem formulation

[Figure: example with operations T11M2, T12M1, T12M3, T21M1, T22M3, T23M2, T24M2, T31M1, T32M2, T33M1 between nodes start and end; caption and numeric duration labels not recovered.]

3 Neighborhood structure

We propose here a neighborhood structure for the SDST-FJSP termed N1SF. This structure considers two types of moves. Firstly, reversals of single critical arcs, which are analogous to the moves of the structure N1S defined in (Vela, Varela, and González 2010) for the FJSP. Furthermore, we consider moves which have to do with machine assignment to operations. Abusing notation, we use the notation N1SF = N1S ∪ N1F to indicate that the new structure is the union of
obtain σ′, if x = PMv and y = SMw before the move, the heads and tails for operations v and w in σ′ are estimated as follows:

r′w = max{rPJw + pPJw, rx + px + sxw}
r′v = max{rPJv + pPJv, r′w + pw + swv}
q′v = max{qSJv + pSJv, qy + py + svy}
q′w = max{qSJw + pSJw, q′v + pv + swv}

Given this, the makespan of σ′ can be estimated as the maximum length of the longest paths from node start to node end through nodes v and w, namely Est(Cmax(σ′)) = max{r′w + pw + q′w, r′v + pv + q′v}. This procedure returns a lower bound of the makespan of σ′.

Regarding N1F, if we change the machine assignation of an operation v, a fast estimation can be obtained by calculating the longest path through v in the new schedule σ′. To this end, we estimate the head and tail of v in σ′ as follows:

r′v = max{rPJv + pPJv, rPMv + pPMv + sPMv,v}
q′v = max{qSJv + pSJv, qSMv + pSMv + sv,SMv}

The makespan of σ′ can then be estimated as Est(Cmax(σ′)) = r′v + pv + q′v (notice that pv may change from σ to σ′). This is also a lower bound of the makespan of σ′.

In order to evaluate the accuracy of these estimates, we estimated and then evaluated the actual makespan of about 100 million neighbors for instances of different sizes. With regard to N1S, we observed that the estimate coincided with the exact value of the makespan in 88.9% of the neighbors; in the remaining 11.1% of the neighbors, the estimate was on average 2.30% lower than the actual makespan. For N1F, the estimate coincided in 94.0% of the neighbors, and in the remaining 6.0% the estimate was on average 3.16% lower than the actual makespan. Therefore, these estimates are both accurate and efficient to compute.
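To make these estimation procedures concrete, the sketch below shows one way of computing both lower bounds from the heads, tails, processing times and setup times of the current schedule. It is only an illustrative Python sketch, not the authors' implementation: the function names, the dictionary-based data layout and the dummy operations assumed for missing neighbors are our own assumptions.

```python
# Illustrative sketch (not the authors' implementation). r, q and p map each
# operation to its head, tail and processing time in the current schedule;
# s[a][b] is the setup time between operations a and b; PJ and SJ give the job
# predecessor and successor of an operation. Dummy operations with
# r = q = p = 0 and zero setups are assumed where a real neighbour is missing.

def estimate_arc_reversal(r, q, p, s, v, w, x, y, PJ, SJ):
    """Estimate Cmax after reversing the critical arc (v, w), where x = PMv is
    the machine predecessor of v and y = SMw the machine successor of w."""
    rw = max(r[PJ[w]] + p[PJ[w]], r[x] + p[x] + s[x][w])
    rv = max(r[PJ[v]] + p[PJ[v]], rw + p[w] + s[w][v])
    qv = max(q[SJ[v]] + p[SJ[v]], q[y] + p[y] + s[v][y])
    qw = max(q[SJ[w]] + p[SJ[w]], qv + p[v] + s[w][v])
    return max(rw + p[w] + qw, rv + p[v] + qv)   # lower bound on Cmax of the neighbor

def estimate_reassignment(r, q, p, s, v, p_v_new, pm, sm, PJ, SJ):
    """Estimate Cmax after moving operation v to another machine, where pm and
    sm are its predecessor and successor on that machine and p_v_new is the
    (possibly different) processing time of v on the new machine."""
    rv = max(r[PJ[v]] + p[PJ[v]], r[pm] + p[pm] + s[pm][v])
    qv = max(q[SJ[v]] + p[SJ[v]], q[sm] + p[sm] + s[v][sm])
    return rv + p_v_new + qv                     # lower bound on Cmax of the neighbor
```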
4 Memetic algorithm

In this section we describe the main characteristics of the memetic algorithm used: firstly the genetic algorithm, and then the tabu search, which is applied to every chromosome generated by the genetic algorithm.

4.1 Genetic Algorithm

We use a conventional genetic algorithm where the initial population is generated at random. Then, the algorithm iterates over a number of steps or generations. In each iteration, a new generation is built from the previous one by applying selection, recombination and replacement operators.

In the selection phase all chromosomes are grouped into pairs, and then each of these pairs is mated to obtain two offspring. Tabu search is applied to both offspring, and finally the replacement is carried out as a tournament selection from each pair of parents and their two offspring. After tabu search, a chromosome is rebuilt from the improved schedule obtained, so that its characteristics can be transferred to subsequent offspring. This effect of the evaluation function is known as Lamarckian evolution.
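As an illustration of how one generation of this scheme fits together, the following Python sketch pairs the chromosomes, mates each pair, improves both offspring with tabu search and keeps the two best of each family. All names (crossover, tabu_search, makespan) are placeholders and the tournament rule shown is only one reasonable reading of the description above, not the authors' code.

```python
import random

def next_generation(population, crossover, tabu_search, makespan):
    """One generation of the memetic scheme sketched above (illustrative).
    tabu_search is assumed to return a chromosome rebuilt from the improved
    schedule, which makes the evolution Lamarckian."""
    random.shuffle(population)                      # random pairing
    survivors = []
    for p1, p2 in zip(population[0::2], population[1::2]):
        o1, o2 = crossover(p1, p2)                  # two offspring per pair
        o1, o2 = tabu_search(o1), tabu_search(o2)   # local improvement
        family = sorted([p1, p2, o1, o2], key=makespan)
        survivors.extend(family[:2])                # tournament replacement
    return survivors
```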
The coding schema is based on the two-vector representation, which is widely used for the flexible job shop problem (see for example (Gao, Sun, and Gen 2008)). In this representation, each chromosome has one vector with the task sequence and another one with the machine assignment. The task sequence vector is based on permutations with repetition, as proposed in (Bierwirth 1995) for the JSP. It is a permutation of the set of operations, each being represented by its job number. For example, if we have a problem with 3 jobs: J1 = {θ11, θ12}, J2 = {θ21, θ22, θ23, θ24}, J3 = {θ31, θ32, θ33}, then the sequence (2 1 2 3 2 3 3 2 1) is a valid vector that represents the topological order {θ21, θ11, θ22, θ31, θ23, θ32, θ33, θ24, θ12}. With this encoding, every permutation produces a feasible processing order.

The machine assignment vector contains, in each position, the machine used by the task located in the same position of the task sequence vector. For example, if we consider the sequence vector above, then the machine vector (1 2 3 1 2 2 1 2 1) indicates that the tasks θ21, θ31, θ33 and θ12 use machine 1, the tasks θ11, θ23, θ32 and θ24 use machine 2, and only the task θ22 uses machine 3.
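To illustrate this encoding, the short Python sketch below (our own illustration, not the authors' code; operations are assumed to be identified by (job, operation-index) pairs) expands the two vectors of the example into an ordered list of operation/machine pairs.

```python
def decode_vectors(sequence, machines):
    """Expand a permutation-with-repetition sequence vector and its machine
    assignment vector into an ordered list of ((job, op_index), machine)."""
    next_op = {}                          # next operation index of each job
    order = []
    for job, machine in zip(sequence, machines):
        next_op[job] = next_op.get(job, 0) + 1
        order.append(((job, next_op[job]), machine))
    return order

# Example from the text: the first entries are ((2, 1), 1), ((1, 1), 2),
# ((2, 2), 3), ((3, 1), 1), ..., i.e. theta_21 on machine 1, theta_11 on
# machine 2, theta_22 on machine 3, and so on.
print(decode_vectors((2, 1, 2, 3, 2, 3, 3, 2, 1), (1, 2, 3, 1, 2, 2, 1, 2, 1)))
```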
For chromosome mating, the genetic algorithm uses an extension of the Job Order Crossover (JOX) described in (Bierwirth 1995) for the classical JSP. Given two parents, JOX selects a random subset of jobs and copies their genes to one offspring in the same positions as in the first parent; the remaining genes are then taken from the second parent so that they maintain their relative ordering. For creating another offspring the parents swap their roles. In extending this operator to the flexible case, we also need to consider the machine assignment vector: we propose choosing for every task the assignation it has in the parent it comes from. We clarify how this extended JOX operator works by means of an example. Let us consider the following two parents:

Parent 1 Sequence: (2 1 2 3 2 3 3 2 1)
Parent 1 Assignment: (1 2 3 1 2 2 1 2 1)
Parent 2 Sequence: (1 3 2 2 1 3 2 2 3)
Parent 2 Assignment: (3 2 3 1 3 2 1 3 3)

If the selected subset of jobs includes just job 2, then:

Offspring 1 Sequence: (2 1 2 3 2 1 3 2 3)
Offspring 1 Assignment: (1 3 3 2 2 3 2 2 3)
Offspring 2 Sequence: (1 3 2 2 3 3 2 2 1)
Offspring 2 Assignment: (2 1 3 1 2 1 1 3 1)

The operator JOX may swap any two operations requiring the same machine; this is an implicit mutation effect. For this reason, we have not used any explicit mutation operator. This considerably simplifies the parameter setting in the experimental study, as the crossover probability is set to 1 and no mutation probability needs to be specified. With this setting, we have obtained results similar to those obtained with a lower crossover probability and a low probability of applying mutation operators. Some authors, for example in (Essafi, Mati, and Dauzère-Pérès 2008) or (González et al. 2012), have already noticed that a mutation operator does not play a relevant role in a memetic algorithm.
To build schedules we have used a simple decoding algorithm: the operations are scheduled in exactly the same order as they appear in the chromosome sequence σ. In other
Table 1: Summary of results in the SDST-FJSP: SDST-HUdata benchmark
Instance Size Flex. LB IFS GA TS GA+TS T(s.)
la01 10 × 5 1.15 609 726 801 (817) 721(*) (724) 721(*) (724) 6
la02 10 × 5 1.15 655 749 847 (870) 737(*) (738) 737(*) (737) 7
la03 10 × 5 1.15 550 652 760 (789) 652 (652) 652 (652) 7
la04 10 × 5 1.15 568 673 770 (790) 673 (678) 673 (675) 9
la05 10 × 5 1.15 503 603 679 (685) 602(*) (602) 602(*) (602) 8
la06 15 × 5 1.15 833 950 1147 (1165) 956 (961) 953 (957) 12
la07 15 × 5 1.15 762 916 1123 (1150) 912(*) (917) 905(*) (911) 18
la08 15 × 5 1.15 845 948 1167 (1186) 940(*) (951) 940(*) (941) 15
la09 15 × 5 1.15 878 1002 1183 (1210) 1002 (1007) 989(*) (995) 22
la10 15 × 5 1.15 866 977 1127 (1156) 956(*) (960) 956(*) (956) 29
la11 20 × 5 1.15 1087 1256 1577 (1600) 1265 (1273) 1244(*) (1254) 33
la12 20 × 5 1.15 960 1082 1365 (1406) 1105 (1119) 1098 (1107) 26
la13 20 × 5 1.15 1053 1215 1473 (1513) 1210(*) (1223) 1205(*) (1212) 24
la14 20 × 5 1.15 1123 1285 1549 (1561) 1267(*) (1277) 1257(*) (1263) 27
la15 20 × 5 1.15 1111 1291 1649 (1718) 1284(*) (1297) 1275(*) (1282) 29
la16 10 × 10 1.15 892 1007 1256 (1269) 1007 (1007) 1007 (1007) 12
la17 10 × 10 1.15 707 858 1007 (1059) 851(*) (851) 851(*) (851) 12
la18 10 × 10 1.15 842 985 1146 (1184) 985 (988) 985 (992) 10
la19 10 × 10 1.15 796 956 1166 (1197) 951(*) (955) 951(*) (951) 16
la20 10 × 10 1.15 857 997 1194 (1228) 997 (997) 997 (997) 12
MRE 16.29 38.81 (42.27) 15.93 (16.49) 15.55 (15.92)
#best 7 0 12 18
Values in bold are best known solutions, (*) improves previous best known solution.
Table 2: Summary of results in the FJSP: DP Benchmark
Instance Size Flex. LB TS hGA CDDS GA+TS T(s.)
01a 10 × 5 1.13 2505 2518 (2528) 2518 (2518) 2518 (2525) 2505(*) (2511) 74
02a 10 × 5 1.69 2228 2231 (2234) 2231 (2231) 2231 (2235) 2232 (2234) 120
03a 10 × 5 2.56 2228 2229 (2230) 2229 (2229) 2229 (2232) 2229 (2230) 143
04a 10 × 5 1.13 2503 2503 (2516) 2515 (2518) 2503 (2510) 2503 (2504) 72
05a 10 × 5 1.69 2189 2216 (2220) 2217 (2218) 2216 (2218) 2219 (2221) 123
06a 10 × 5 2.56 2162 2203 (2206) 2196 (2198) 2196 (2203) 2200 (2204) 157
07a 15 × 8 1.24 2187 2283 (2298) 2307 (2310) 2283 (2296) 2266(*) (2286) 201
08a 15 × 8 2.42 2061 2069 (2071) 2073 (2076) 2069 (2069) 2072 (2075) 197
09a 15 × 8 4.03 2061 2066 (2067) 2066 (2067) 2066 (2067) 2066 (2067) 291
10a 15 × 8 1.24 2178 2291 (2306) 2315 (2315) 2291 (2303) 2267(*) (2273) 240
11a 15 × 8 2.42 2017 2063 (2066) 2071 (2072) 2063 (2072) 2068 (2071) 222
12a 15 × 8 4.03 1969 2034 (2038) 2030 (2031) 2031 (2034) 2037 (2041) 266
13a 20 × 10 1.34 2161 2260 (2266) 2257 (2260) 2257 (2260) 2271 (2276) 241
14a 20 × 10 2.99 2161 2167 (2168) 2167 (2168) 2167 (2179) 2169 (2171) 340
15a 20 × 10 5.02 2161 2167 (2167) 2165 (2165) 2165 (2170) 2166 (2166) 470
16a 20 × 10 1.34 2148 2255 (2259) 2256 (2258) 2256 (2258) 2266 (2271) 253
17a 20 × 10 2.99 2088 2141 (2144) 2140 (2142) 2140 (2146) 2147 (2150) 333
18a 20 × 10 5.02 2057 2137 (2140) 2127 (2131) 2127 (2132) 2138 (2141) 488
MRE 2.01 (2.24) 2.12 (2.19) 1.94 (2.19) 1.99 (2.17)
#best 9 10 13 6
Values in bold are best known solutions, (*) improves previous best known solution.
mark used in that paper, which is denoted SDST-HUdata. It consists of 20 instances derived from the first 20 instances of the data subset of the FJSP benchmark proposed in (Hurink, Jurisch, and Thole 1994). Each instance was created by adding to the original instance one setup time matrix st^r for each machine r; the same setup time matrix was added for each machine in all benchmark instances. Each matrix has size n × n, and the value st^r_ij indicates the setup time needed to reconfigure machine r when it switches from job i to job j. These setup times are sequence dependent and they fulfill the triangle inequality.
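As a small illustration of the latter property (our own check, assuming the usual form of the triangle inequality for setup times, st_ik ≤ st_ij + st_jk; this snippet is not part of the benchmark generator), a setup matrix can be validated as follows.

```python
def fulfills_triangle_inequality(st):
    """Check that st[i][k] <= st[i][j] + st[j][k] for all jobs i, j, k,
    where st is the n x n setup-time matrix of one machine."""
    n = len(st)
    return all(st[i][k] <= st[i][j] + st[j][k]
               for i in range(n) for j in range(n) for k in range(n))
```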
IFS is implemented in Java and run on an AMD Phenom II X4 Quad at 3.5 GHz under Linux Ubuntu 10.4.1, with a maximum CPU time limit set to 800 seconds for all runs. We consider here the best makespan reported in (Oddi et al. 2011) for each instance, regardless of the configuration used.

Table 1 shows the results of the experiments on the SDST-HUdata benchmark. In particular, we indicate for each instance the name, the size (n × m), the flexibility (i.e., the average number of available machines per operation) and a lower bound LB. The lower bounds are those reported in (Mastrolilli and Gambardella 2000) for the original instances without setups, and therefore they are probably far from the optimal solutions. For IFS we indicate the best results reported in (Oddi et al. 2011), and for GA, TS and GA+TS we indicate the best and average makespan over 10 runs for each instance. We also show the runtime in seconds of a single run of our algorithms. Additionally, we report the MRE (Mean Relative Error) for each method, calculated for each instance as MRE = (Cmax − LB)/LB × 100.
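For clarity, the mean value reported in the tables can be reproduced as in the following small sketch (a hypothetical helper, not part of the original experimental code), averaging the per-instance relative errors.

```python
def mean_relative_error(cmax, lb):
    """MRE (%) of a list of makespans against the corresponding lower bounds."""
    return sum((c - b) / b * 100 for c, b in zip(cmax, lb)) / len(cmax)

# e.g. for la01 and la02 with the GA+TS best values of Table 1:
# mean_relative_error([721, 737], [609, 655]) is roughly 15.5
```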
Finally, in the bottom line we indicate the number of instances for which a method obtains the best known solution (#best). We mark in bold the best known solutions, and we mark with "(*)" the cases where a method improves the previous best known solution.

We notice that GA alone obtains very poor results, with an MRE of 38.81% for the best solutions. TS alone performs much better than GA, with an MRE of 15.93% for the best solutions, and it was able to reach the best known solution in 12 of the 20 instances. However, the hybridized approach obtains even better results: in fact, GA+TS obtains a better average makespan than TS in 13 instances, the same in 6 instances and a worse average in only 1 instance. This shows the good synergy between the two metaheuristics.

Overall, compared to IFS, GA+TS establishes new best solutions for 13 instances and reaches the same best known solution for 5 instances; for the instances la06 and la12 the solution reached is worse than the current best known solution. Regarding the average makespan, it is better than the best solution obtained by IFS in 13 instances, equal in 3 instances and worse in 4 instances. IFS achieved an MRE of 16.29%, while GA+TS achieved an MRE of 15.55% for the best solutions and 15.92% for the average values. Additionally, the CPU time of GA+TS (between 6 and 33 seconds per run depending on the instance) is lower than that of IFS (800 seconds per run), although CPU times are not directly comparable due to the differences in programming languages, operating systems and target machines. In conclusion, GA+TS is better than GA and TS alone, and it is quite competitive with IFS.
Table 3: Summary of results in the FJSP: BC Benchmark
Instance Size Flex. LB TS hGA CDDS GA+TS T(s.)
mt10c1 10 × 11 1.10 655 928 (928) 927 (927) 928 (929) 927 (927) 14
mt10cc 10 × 12 1.20 655 910 (910) 910 (910) 910 (911) 908(*) (909) 14
mt10x 10 × 11 1.10 655 918 (918) 918 (918) 918 (918) 918 (922) 18
mt10xx 10 × 12 1.20 655 918 (918) 918 (918) 918 (918) 918 (918) 16
mt10xxx 10 × 13 1.30 655 918 (918) 918 (918) 918 (918) 918 (918) 19
mt10xy 10 × 12 1.20 655 906 (906) 905 (905) 906 (906) 905 (905) 16
mt10xyz 10 × 13 1.30 655 847 (850) 849 (849) 849 (851) 849 (850) 21
setb4c9 15 × 11 1.10 857 919 (919) 914 (914) 919 (919) 914 (914) 22
setb4cc 15 × 12 1.20 857 909 (912) 914 (914) 909 (911) 907(*) (907) 22
setb4x 15 × 11 1.10 846 925 (925) 925 (931) 925 (925) 925 (925) 18
setb4xx 15 × 12 1.20 846 925 (926) 925 (925) 925 (925) 925 (925) 19
setb4xxx 15 × 13 1.30 846 925 (925) 925 (925) 925 (925) 925 (925) 20
setb4xy 15 × 12 1.20 845 916 (916) 916 (916) 916 (916) 910(*) (910) 25
setb4xyz 15 × 13 1.30 838 905 (908) 905 (905) 905 (907) 905 (905) 19
seti5c12 15 × 16 1.07 1027 1174 (1174) 1175 (1175) 1174 (1175) 1171(*) (1173) 41
seti5cc 15 × 17 1.13 955 1136 (1136) 1138 (1138) 1136 (1137) 1136 (1137) 40
seti5x 15 × 16 1.07 955 1201 (1204) 1204 (1204) 1201 (1202) 1199(*) (1200) 43
seti5xx 15 × 17 1.13 955 1199 (1201) 1202 (1203) 1199 (1199) 1197(*) (1198) 38
seti5xxx 15 × 18 1.20 955 1197 (1198) 1204 (1204) 1197 (1198) 1197 (1197) 40
seti5xy 15 × 17 1.13 955 1136 (1136) 1136 (1137) 1136 (1138) 1136 (1137) 39
seti5xyz 15 × 18 1.20 955 1125 (1127) 1126 (1126) 1125 (1125) 1127 (1128) 41
MRE 22.53 (22.63) 22.61 (22.66) 22.54 (22.60) 22.42 (22.49)
#best 12 11 11 19
Values in bold are best known solutions, (*) improves previous best known solution.
5.2 Comparison with the state-of-the-art in the FJSP

In the FJSP it is easier to compare with the state of the art, because of the number of existing works. We consider several sets of problem instances: the DP benchmark proposed in (Dauzère-Pérès and Paulli 1997) with 18 instances, the BC benchmark proposed in (Barnes and Chambers 1996) with 21 instances, and the BR benchmark proposed in (Brandimarte 1993) with 10 instances.

We compare GA+TS with the tabu search (TS) of (Mastrolilli and Gambardella 2000), the hybrid genetic algorithm (hGA) of (Gao, Sun, and Gen 2008) and the climbing depth-bounded discrepancy search (CDDS) of (Hmida et al. 2010). These three methods are, as far as we know, the best existing approaches.

TS was coded in C++ on a 266 MHz Pentium. Its authors execute 5 runs per instance and limit the maximum number of iterations to between 100000 and 500000, depending on the instance. With this configuration they report run times between 28 and 150 seconds in the DP benchmark, between 1 and 24 seconds in the BC benchmark, and between 0.01 and 8 seconds in the BR benchmark.

hGA was implemented in Delphi on a 3.0 GHz Pentium. Depending on the complexity of the problems, the population size of hGA ranges from 300 to 3000, and the number of generations is limited to 200. Its authors also execute 5 runs per instance, and with the described configuration they report run times between 96 and 670 seconds in the DP benchmark, between 10 and 72 seconds in the BC benchmark, and between 1 and 20 seconds in the BR benchmark.

CDDS was coded in C on an Intel Core 2 Duo 2.9 GHz PC with 2 GB of RAM. It is a deterministic algorithm; however, the paper reports the results of 4 runs per instance, one for each of the neighborhood structures its authors propose, so we report the best and average solutions over these runs. The authors set the maximum CPU time to 15 seconds for all test instances, except for the DP benchmark, where the maximum CPU time is set to 200 seconds.

GA+TS was run 10 times for each instance. The run time of the algorithm grows with the size and flexibility of the instance: the CPU times range from 72 to 488 seconds in the DP benchmark, from 14 to 43 seconds in the BC benchmark, and from 6 to 112 seconds in the BR benchmark. Therefore, the run times are similar to those of the other methods, although they are not directly comparable due to the differences in languages and target machines. Moreover, we have seen that in easy instances the best solution is found in the very first generations, so for many of these instances GA+TS requires much less CPU time to reach the same solution.

Tables 2, 3 and 4 show the results of the experiments in the DP, BC and BR benchmarks, respectively. As we did for Table 1, we indicate for each instance the name, size, flexibility, and the lower bound reported in (Mastrolilli and Gambardella 2000). Then, for each method we report the best and average makespan. We also indicate the runtime of a single run of GA+TS and, finally, the MRE for each method and the number of instances for which the method reaches the best known solution.
Table 4: Summary of results in the FJSP: BR Benchmark
Instance Size Flex. LB TS hGA CDDS GA+TS T(s.)
Mk01 10 × 6 2.09 36 40 (40) 40 (40) 40 (40) 40 (40) 6
Mk02 10 × 6 4.10 24 26 (26) 26 (26) 26 (26) 26 (26) 13
Mk03 15 × 8 3.01 204 204 (204) 204 (204) 204 (204) 204 (204) 9
Mk04 15 × 8 1.91 48 60 (60) 60 (60) 60 (60) 60 (60) 20
Mk05 15 × 4 1.71 168 173 (173) 172 (172) 173 (174) 172 (172) 17
Mk06 10 × 15 3.27 33 58 (58) 58 (58) 58 (59) 58 (58) 54
Mk07 20 × 5 2.83 133 144 (147) 139 (139) 139 (139) 139 (139) 40
Mk08 20 × 10 1.43 523 523 (523) 523 (523) 523 (523) 523 (523) 14
Mk09 20 × 10 2.53 299 307 (307) 307 (307) 307 (307) 307 (307) 26
Mk10 20 × 15 2.98 165 198 (199) 197 (197) 197 (198) 199 (200) 112
MRE 15.41 (15.83) 14.92 (14.92) 14.98 (15.36) 15.04 (15.13)
#best 7 10 9 9
Values in bold are best known solutions.
In the DP benchmark GA+TS improves the previous best known solution in 3 of the 18 instances (01a, 07a and 10a). It is remarkable that we prove the optimality of the solution 2505 for instance 01a, as this value is also its lower bound. Moreover, the MRE of the average makespan of GA+TS is the best of the four algorithms considered (2.17% versus 2.19%, 2.19% and 2.24%). The MRE of the best makespan is the second best of the four algorithms, although in this case we have to be aware that our algorithm has some advantage, since we launched it more times for each instance.

In the BC benchmark we are able to improve the previous best known solution in 6 of the 21 instances (mt10cc, setb4cc, setb4xy, seti5c12, seti5x and seti5xx). Additionally, the MRE of both the best and average makespan of GA+TS is the best of the four algorithms considered: considering the best makespan our MRE is 22.42% (versus 22.53%, 22.54% and 22.61%), and considering the average makespan our MRE is 22.49% (versus 22.60%, 22.63% and 22.66%). Moreover, the number of instances for which we obtain the best known solution is the best of the four methods: 19 of the 21 instances.

In the BR benchmark we obtain the best known solution in 9 of the 10 instances; only hGA is able to improve that number. Regarding the MRE, GA+TS is second best considering the average makespan and third best considering the best makespan. We can conclude that the BR benchmark is generally easy, as for most instances all four methods reached the best known solution in every run. In many of these instances GA+TS reached the best known solution in the first generation of each run.

Overall, GA+TS shows a lower efficiency on the largest and most flexible instances, compared to the other algorithms. In our opinion this is because GA+TS needs more run time to improve the obtained solution on these instances. Additionally, regarding run times in the FJSP benchmarks, our algorithm is at a disadvantage because it performs all the setup calculations even when the setup times are zero, as is the case in these instances.

In summary, we can conclude that GA+TS is competitive with the state of the art and was able to obtain new upper bounds for 9 of the 49 instances considered.

6 Conclusions

We have considered the flexible job shop scheduling problem with sequence-dependent setup times, where the objective is to minimize the makespan. We have proposed a neighborhood structure termed N1SF which is the union of two structures: N1S, designed to modify the sequencing of the tasks, and N1F, designed to modify the machine assignations. This structure has then been used in a tabu search algorithm, which is embedded in a genetic algorithm framework. We have defined methods for estimating the makespan of the neighbors in both structures and empirically shown that they are very accurate. In the experimental study we have shown that the hybridized approach (GA+TS) is better than each method alone. We have also compared our approach against state-of-the-art algorithms, on both FJSP and SDST-FJSP benchmarks. In the SDST-FJSP we have compared GA+TS with the algorithm proposed in (Oddi et al. 2011) across the 20 instances described in that paper. For the FJSP we have considered three sets of instances, comparing GA+TS with the methods proposed in (Gao, Sun, and Gen 2008), (Mastrolilli and Gambardella 2000) and (Hmida et al. 2010). Our proposal compares favorably to state-of-the-art methods in both problems considered, and we are able to improve the best known solution in 9 of the 49 FJSP instances and in 13 of the 20 SDST-FJSP instances.

As future work we plan to experiment across the FJSP benchmark proposed in (Hurink, Jurisch, and Thole 1994), and also across the SDST-FJSP benchmark proposed in (Saidi-Mehrabad and Fattahi 2007). We also plan to define a new SDST-FJSP benchmark with bigger instances and more flexibility than the ones proposed in (Oddi et al. 2011). We shall also consider different crossover operators and other metaheuristics such as scatter search with path relinking. Finally, we plan to extend our approach to other variants of scheduling problems which are even closer to real environments, for instance problems with uncertain durations, or problems considering alternative objective functions such as weighted tardiness.

Acknowledgements

We would like to thank Angelo Oddi and Riccardo Rasconi for providing us with the SDST-FJSP instances. All authors are supported by Grant MEC-FEDER TIN2010-20976-C02-02.
References

Barnes, J., and Chambers, J. 1996. Flexible job shop scheduling by tabu search. Technical Report Series ORP96-09, Graduate Program in Operations Research and Industrial Engineering, The University of Texas at Austin.

Bierwirth, C. 1995. A generalized permutation approach to job shop scheduling with genetic algorithms. OR Spectrum 17:87–92.

Brandimarte, P. 1993. Routing and scheduling in a flexible job shop by tabu search. Annals of Operations Research 41:157–183.

Brucker, P., and Thiele, O. 1996. A branch and bound method for the general job-shop problem with sequence-dependent setup times. Operations Research Spektrum 18:145–161.

Dauzère-Pérès, S., and Paulli, J. 1997. An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search. Annals of Operations Research 70(3):281–306.

Dell'Amico, M., and Trubian, M. 1993. Applying tabu search to the job-shop scheduling problem. Annals of Operations Research 41:231–252.

Essafi, I.; Mati, Y.; and Dauzère-Pérès, S. 2008. A genetic local search algorithm for minimizing total weighted tardiness in the job-shop scheduling problem. Computers and Operations Research 35:2599–2616.

Gao, J.; Sun, L.; and Gen, M. 2008. A hybrid genetic and variable neighborhood descent algorithm for flexible job shop scheduling problems. Computers and Operations Research 35:2892–2907.

Glover, F. 1989a. Tabu search - part I. ORSA Journal on Computing 1(3):190–206.

Glover, F. 1989b. Tabu search - part II. ORSA Journal on Computing 2(1):4–32.

González, M. A.; González-Rodríguez, I.; Vela, C.; and Varela, R. 2012. An efficient hybrid evolutionary algorithm for scheduling with setup times and weighted tardiness minimization. Soft Computing 16(12):2097–2113.

González, M. A.; Vela, C.; and Varela, R. 2012. A competent memetic algorithm for complex scheduling. Natural Computing 11:151–160.

Hmida, A.; Haouari, M.; Huguet, M.; and Lopez, P. 2010. Discrepancy search for the flexible job shop scheduling problem. Computers and Operations Research 37:2192–2201.

Hurink, E.; Jurisch, B.; and Thole, M. 1994. Tabu search for the job shop scheduling problem with multi-purpose machines. Operations Research Spektrum 15:205–215.

Mastrolilli, M., and Gambardella, L. 2000. Effective neighborhood functions for the flexible job shop problem. Journal of Scheduling 3(1):3–20.

Mattfeld, D. 1995. Evolutionary Search and the Job Shop: Investigations on Genetic Algorithms for Production Scheduling. Springer-Verlag.

Nowicki, E., and Smutnicki, C. 2005. An advanced tabu search algorithm for the job shop problem. Journal of Scheduling 8:145–159.

Oddi, A.; Rasconi, R.; Cesta, A.; and Smith, S. 2011. Applying iterative flattening search to the job shop scheduling problem with alternative resources and sequence dependent setup times. In COPLAS 2011: Proceedings of the Workshop on Constraint Satisfaction Techniques for Planning and Scheduling Problems.

Saidi-Mehrabad, M., and Fattahi, P. 2007. Flexible job shop scheduling with tabu search algorithms. International Journal of Advanced Manufacturing Technology 32:563–570.

Taillard, E. 1993. Parallel taboo search techniques for the job shop scheduling problem. ORSA Journal on Computing 6:108–117.

Van Laarhoven, P.; Aarts, E.; and Lenstra, K. 1992. Job shop scheduling by simulated annealing. Operations Research 40:113–125.

Vela, C. R.; Varela, R.; and González, M. A. 2010. Local search and genetic algorithm for the job shop scheduling problem with sequence dependent setup times. Journal of Heuristics 16:139–165.