
Proceedings of the Twenty-Third International Conference on Automated Planning and Scheduling

An Efficient Memetic Algorithm for the Flexible Job Shop with Setup Times

Miguel A. González and Camino R. Vela and Ramiro Varela


Artificial Intelligence Center, University of Oviedo, Spain,
Campus of Viesques, 33271 Gijón
email {mig,crvela,ramiro}@uniovi.es

Abstract

This paper addresses the flexible job shop scheduling problem with sequence-dependent setup times (SDST-FJSP). This is an extension of the classical job shop scheduling problem with many applications in real production environments. We propose an effective neighborhood structure for the problem, including feasibility and non-improving conditions, as well as procedures for fast neighbor estimation. This neighborhood is embedded into a genetic algorithm hybridized with tabu search. We conducted an experimental study to compare the proposed algorithm with the state of the art in the SDST-FJSP and also in the standard FJSP. In this study, our algorithm obtained better results than those from other methods. Moreover, it established new upper bounds for a number of instances.

1 Introduction

The Job Shop Scheduling Problem (JSP) is a simple model of many real production processes. However, in many environments the production model has to consider additional characteristics or complex constraints. For example, in the automobile, printing, semiconductor, chemical or pharmaceutical industries, setup operations such as cleaning up or changing tools are required between two consecutive jobs on the same machine. These setup operations depend on both the outgoing and incoming jobs, so they cannot be considered as being part of either of these jobs. Also, the possibility of selecting alternative routes among the machines is useful in production environments where multiple machines are able to perform the same operation (possibly with different processing times), as it allows the system to absorb changes in the demand of work or in the performance of the machines. When these two factors are considered simultaneously, the problem is known as the flexible job shop scheduling problem with sequence-dependent setup times (SDST-FJSP).

The classical JSP with makespan minimization has been intensely studied, and many results have been established that have given rise to very efficient algorithms, as for example the tabu search proposed in (Nowicki and Smutnicki 2005). Incorporating sequence-dependent setup times or a flexible shop environment changes the nature of scheduling problems, so these well-known results and techniques for the JSP are not directly applicable. There are several papers dealing with each of the two factors (setups and flexibility). For example, the job shop with setups is studied in (Brucker and Thiele 1996), where the authors developed a branch and bound algorithm, and in (Vela, Varela, and González 2010) and (González, Vela, and Varela 2012), where the authors took some ideas proposed in (Van Laarhoven, Aarts, and Lenstra 1992) as a basis for new neighborhood structures.

The flexible job shop has been widely studied. Based on the observation that the FJSP turns into the classical job shop scheduling problem when a machine assignment is chosen for all the tasks, early literature proposed hierarchical strategies for this complex scheduling problem, where machine assignment and sequencing are studied separately, see for example (Brandimarte 1993). However, more recent integrated approaches, such as the tabu search algorithm proposed in (Mastrolilli and Gambardella 2000), the hybrid genetic algorithm proposed in (Gao, Sun, and Gen 2008) or the discrepancy search approach proposed in (Hmida et al. 2010), usually obtain better results.

However, very few papers have considered both flexibility and setup times at the same time. Among these, (Saidi-Mehrabad and Fattahi 2007), where the authors solve the problem with a tabu search algorithm, and (Oddi et al. 2011), where the problem is solved by means of iterative flattening search, deserve special mention.

In this paper, we propose a new neighborhood structure for the SDST-FJSP. We also define feasibility and non-improving conditions, as well as algorithms for fast estimation of neighbors' quality. This neighborhood is then exploited in a tabu search algorithm, which is embedded in a genetic algorithm. We conducted an experimental study in which we first analyzed the synergy between the two metaheuristics, and then we compared our algorithm with the state of the art in both the SDST-FJSP and the standard FJSP.

The remainder of the paper is organized as follows. In Section 2 we formulate the problem and describe the solution graph model. In Section 3 we define the proposed neighborhood. Section 4 details the metaheuristics used. In Section 5 we report the results of the experimental study, and finally Section 6 summarizes the main conclusions of this paper and gives some ideas for future work.

Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
2 Problem formulation

In the job shop scheduling problem (JSP), we are given a set of n jobs, J = {J_1, ..., J_n}, which have to be processed on a set of m machines or resources, M = {M_1, ..., M_m}. Each job J_j consists of a sequence of n_j operations (θ_j1, θ_j2, ..., θ_jn_j), where θ_ij must be processed without interruption on machine m_ij ∈ M during p_ij ∈ N time units. The operations of a job must be processed one after another in the given order (the job sequence) and each machine can process at most one operation at a time.

We also consider the addition of sequence-dependent setup times. Therefore, if we have two operations u and v (we use this notation to simplify expressions), a setup time s_uv is needed to adjust the machine when v is processed right after u. These setup times depend on both the outgoing and incoming operations. We also define an initial setup time of the machine required by v, denoted s_0v, required when this is the first operation on that machine. We consider that the setup times verify the triangle inequality, i.e., s_uv + s_vw ≥ s_uw holds for any operations u, v and w requiring the same machine, as usually happens in real scenarios.

Furthermore, to add flexibility to the problem, an operation θ_ij can be executed by one machine out of a set M_ij ⊆ M of given machines. The processing time for operation θ_ij on machine k ∈ M_ij is p_ijk ∈ N. Notice that the processing time of an operation may be different on each machine and that a machine may process several operations of the same job. To simplify notation, in the remainder of the paper p_v denotes the processing time of an operation v on its current machine assignment.

Let Ω denote the set of operations. A solution may be viewed as a feasible assignment of machines to operations and a processing order of the operations on the machines (i.e., a machine sequence). So, given a machine assignment, a solution may be represented by a total ordering of the operations, σ, compatible with the job and machine sequences. For an operation v ∈ Ω, let PJ_v and SJ_v denote the operations just before and after v in the job sequence, and PM_v and SM_v the operations right before and after v in the machine sequence in a solution σ. The starting and completion times of v, denoted t_v and c_v respectively, can be calculated as t_v = max(c_{PJ_v}, c_{PM_v} + s_{PM_v v}) and c_v = t_v + p_v. The objective is to find a solution σ that minimizes the makespan, i.e., the completion time of the last operation, denoted as C_max(σ) = max_{v ∈ Ω} c_v.

2.1 Solution graph

We define a solution graph model which is adapted from (Vela, Varela, and González 2010) to deal with machine flexibility. In accordance with this model, a feasible operation processing order σ can be represented by an acyclic directed graph G_σ where each node represents either an operation of the problem or one of the dummy nodes start and end, which are fictitious operations with processing time 0. In G_σ, there are conjunctive arcs representing job processing orders and disjunctive arcs representing machine processing orders. Each disjunctive arc (v, w) is weighted with p_v + s_vw and each conjunctive arc (u, w) is weighted with p_u. If w is the first operation in the machine processing order, there is an arc (start, w) in G_σ with weight s_0w, and if w is the last operation in both the job and machine sequences, there is an arc (w, end) with weight p_w. Figure 1 shows a solution graph.

[Figure 1: A feasible schedule to a problem with 3 jobs and 3 machines represented by a solution graph. Bold-face arcs show a critical path whose length, i.e., the makespan, is 25.]

The makespan of the schedule is the cost of a critical path in G_σ, i.e., a directed path from node start to node end having maximum cost. Bold-face arcs in Figure 1 represent a critical path. Nodes and arcs in a critical path are also termed critical. We define a critical block as a maximal subsequence of consecutive operations in a critical path requiring the same machine such that two consecutive operations of the block do not belong to the same job. As we will see in Section 3, this point is important as the order of two operations in the same job cannot be reversed. Most neighborhood structures proposed for job shop problems rely on exchanging the processing order of operations in critical blocks (Dell'Amico and Trubian 1993; Van Laarhoven, Aarts, and Lenstra 1992).

To formalize the description of the neighborhood structures, we introduce the concepts of head and tail of an operation v, denoted r_v and q_v respectively. Heads and tails are calculated as follows:

    r_start = q_end = 0
    r_v = max(r_{PJ_v} + p_{PJ_v}, r_{PM_v} + p_{PM_v} + s_{PM_v v})
    r_end = max_{v ∈ PJ_end ∩ PM_end} (r_v + p_v)
    q_v = max(q_{SJ_v} + p_{SJ_v}, q_{SM_v} + p_{SM_v} + s_{v SM_v})
    q_start = max_{v ∈ SM_start} (q_v + p_v + s_{0v})

Here, we abuse notation slightly, so SM_start (resp. PM_end) denotes the set formed by the first (resp. last) operation processed on each of the m machines, and PJ_end denotes the set formed by the last operation processed in each of the n jobs. A node v is critical if and only if C_max = r_v + p_v + q_v.
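To make this recursion concrete, the following Python sketch (ours; the paper's own implementation is in C++ and is not shown) computes heads, tails and the makespan by sweeping the total ordering σ forwards and backwards. The Solution container and its field names are assumptions introduced only for illustration.

```python
# Hypothetical container for a solution of the SDST-FJSP (names are ours).
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Solution:
    sigma: List[str]                      # total ordering of the operations
    proc: Dict[str, int]                  # p_v on the assigned machine
    pj: Dict[str, Optional[str]]          # PJ_v (job predecessor) or None
    sj: Dict[str, Optional[str]]          # SJ_v (job successor) or None
    pm: Dict[str, Optional[str]]          # PM_v (machine predecessor) or None
    sm: Dict[str, Optional[str]]          # SM_v (machine successor) or None
    setup: Dict[Tuple[str, str], int]     # sequence-dependent setups s_uv
    setup0: Dict[str, int]                # initial setups s_0v

def heads_tails(sol: Solution) -> Tuple[Dict[str, int], Dict[str, int], int]:
    """Compute r_v, q_v and the makespan C_max = max_v (r_v + p_v + q_v)."""
    r: Dict[str, int] = {}
    q: Dict[str, int] = {}
    for v in sol.sigma:                   # sigma is a topological order of G_sigma
        pj = sol.pj[v]
        rj = r[pj] + sol.proc[pj] if pj is not None else 0
        u = sol.pm[v]
        rm = sol.setup0[v] if u is None else r[u] + sol.proc[u] + sol.setup[(u, v)]
        r[v] = max(rj, rm)
    for v in reversed(sol.sigma):
        sj = sol.sj[v]
        qj = q[sj] + sol.proc[sj] if sj is not None else 0
        w = sol.sm[v]
        qm = 0 if w is None else q[w] + sol.proc[w] + sol.setup[(v, w)]
        q[v] = max(qj, qm)
    cmax = max(r[v] + sol.proc[v] + q[v] for v in sol.sigma)
    return r, q, cmax
```

The forward sweep realizes the head recursion (with s_0v taken from the (start, v) arc when v is first on its machine) and the backward sweep realizes the tail recursion; the returned cmax coincides with the criticality condition C_max = r_v + p_v + q_v.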
3 Neighborhood structure

We propose here a neighborhood structure for the SDST-FJSP termed N1SF. This structure considers two types of moves. Firstly, reversals of single critical arcs, which are analogous to the moves of the structure N1S defined in (Vela, Varela, and González 2010) for the FJSP. Furthermore, we consider moves which have to do with the machine assignment of operations. Abusing notation, we use N1SF = N1S ∪ N1F to indicate that the new structure is the union of these two subsets of moves.

[Figure 2: A neighbor of the schedule of Figure 1 created with N1F. The makespan is reduced from 25 to 19.]

3.1 N1S structure

For the sequencing subproblem, we consider single moves, i.e., reversing the processing order of two consecutive operations. We use a filtering mechanism based on the results below, which allows the algorithm to discard unfeasible neighbors and a number of non-improving ones. By doing this, we get a neighborhood of reasonable size while augmenting the chance of obtaining improving neighbors.

The next result establishes a sufficient condition for non-improvement when a single arc is reversed in a solution.

Proposition 1. Let σ be a schedule and (v, w) a disjunctive arc which is not in a critical block. Then, if the setup times fulfill the triangle inequality, reversing the arc (v, w) does not produce any improvement, even if the resulting schedule σ' is feasible.

Then, as we consider that the setup times fulfill the triangle inequality, we will only consider reversing critical arcs in order to obtain improving schedules. However, reversing some of the critical arcs cannot produce improving schedules either, as is established in the following result.

Proposition 2. Let σ be a schedule and (v, w) an arc inside a critical block B, i.e., PM_v and SM_w belong to B. Even if the schedule σ' obtained from σ by reversing the arc (v, w) is feasible, σ' does not improve σ if the following condition holds:

    s_{xw} + s_{wv} + s_{vy} ≥ s_{xv} + s_{vw} + s_{wy},    (1)

where x = PM_v and y = SM_w in schedule σ.

Regarding feasibility, the following result guarantees that the resulting schedule after reversing the arc (v, w) is feasible.

Proposition 3. Let σ be a schedule and (v, w) an arc in a critical block. A sufficient condition for an alternative path between v and w not to exist is that

    r_{PJ_w} < r_{SJ_v} + p_{SJ_v} + C,    (2)

where C = s_{z PJ_w} if SM_z = PJ_w, and C = min{p_{SJ_z}, s_{z SM_z} + p_{SM_z}} otherwise, with z = SJ_v.

Therefore, in N1S, we only consider reversing arcs (v, w) in a critical block which fulfill condition (2), provided that condition (1) does not hold.

Notice that, when one of the neighbors is finally selected, we need to reconstruct the total ordering σ of the operations. If the neighbor was created by N1S reversing an arc (v, w), then we have to reconstruct σ starting from the position of operation v and finishing in the position of w (a similar reconstruction is detailed in (Mattfeld 1995) for the classic JSP).
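The filtering of N1S can be sketched as follows, reusing the hypothetical Solution container and the heads r from the earlier sketch. The function and argument names are ours, and border cases where some of the referenced job or machine neighbours do not exist are omitted for brevity.

```python
def passes_filters(sol, r, v, w, interior):
    """True iff reversing the critical arc (v, w) survives both filters.

    `interior` says whether PM_v and SM_w lie in the same critical block as
    v and w (the hypothesis of Proposition 2).  The sketch assumes that
    PM_v, SM_w, SJ_v, PJ_w and the neighbours of z = SJ_v all exist.
    """
    s, p = sol.setup, sol.proc
    if interior:
        x, y = sol.pm[v], sol.sm[w]
        # Condition (1): if it holds, reversing (v, w) cannot improve the makespan.
        if s[(x, w)] + s[(w, v)] + s[(v, y)] >= s[(x, v)] + s[(v, w)] + s[(w, y)]:
            return False
    # Condition (2): sufficient for the reversed schedule to remain acyclic.
    z = sol.sj[v]
    if sol.sm[z] == sol.pj[w]:
        c = s[(z, sol.pj[w])]
    else:
        c = min(p[sol.sj[z]], s[(z, sol.sm[z])] + p[sol.sm[z]])
    return r[sol.pj[w]] < r[z] + p[z] + c
```

A move generator would call this check for every critical arc (v, w) of the current schedule and keep only the arcs for which it returns True.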

3.2 N1F structure

Changing the machine of one operation may give rise to an improved schedule. This possibility was already exploited in (Mastrolilli and Gambardella 2000), where the authors consider a set of moves, called k-insertions, in which an operation v is assigned to a new machine k ∈ M_v and then they look for the optimal position for v in the machine sequence of k. However, that procedure is time consuming, so we propose here a simpler approach which works as follows.

Let σ be a total ordering of the operations in a schedule and let v be an operation in a critical path of G_σ. An element of N1F is obtained by assigning a new machine k ∈ M_v to the operation v and then selecting a position for v in the machine sequence of k so that the topological order given by σ remains identical after the move. Clearly, this neighbor is feasible, and the method is very efficient and not time consuming. The number of neighbors depends on the number of machines suitable for each operation on the critical path.

As an example, notice that M3 in Figure 1 only processes the operation θ_22. If M3 can process θ_12, N1F switches the machine assignment of the critical task θ_12 from M1 to M3 to create a new neighbor. As θ_12 is after θ_22 in the topological order, N1F inserts θ_12 after θ_22 in the new machine sequence of M3. The result is the schedule of Figure 2, whose makespan is better than that of the original schedule.

3.3 Makespan estimate

Computing the actual makespan of a neighbor is computationally expensive: when the arc (v, w) is reversed, it requires recalculating the head of all operations after v and the tail of all operations before w; similarly, it requires recalculating the head of all operations after v and the tail of all operations before v when v is assigned to a different machine. So, as is usual, we propose using estimation procedures instead.

For N1S, we borrow the estimation used in (Vela, Varela, and González 2010), which is based on the lpath procedure for the classical JSP from (Taillard 1993). To calculate this estimate, after reversing the arc (v, w) in a schedule σ to obtain σ', if x = PM_v and y = SM_w before the move, the heads and tails for operations v and w in σ' are estimated as follows:

    r'_w = max(r_{PJ_w} + p_{PJ_w}, r_x + p_x + s_{xw})
    r'_v = max(r_{PJ_v} + p_{PJ_v}, r'_w + p_w + s_{wv})
    q'_v = max(q_{SJ_v} + p_{SJ_v}, q_y + p_y + s_{vy})
    q'_w = max(q_{SJ_w} + p_{SJ_w}, q'_v + p_v + s_{wv})

Given this, the makespan of σ' can be estimated as the maximum length of the longest paths from node start to node end through nodes v and w, namely Est(C_max(σ')) = max(r'_w + p_w + q'_w, r'_v + p_v + q'_v). This procedure returns a lower bound of the makespan of σ'.

Regarding N1F, if we change the machine assignment of an operation v, a fast estimation can be obtained by calculating the longest path through v in the new schedule σ'. To this end, we estimate the head and tail of v in σ' as follows, where PM_v and SM_v now denote the neighbours of v in its new machine sequence:

    r'_v = max(r_{PJ_v} + p_{PJ_v}, r_{PM_v} + p_{PM_v} + s_{PM_v v})
    q'_v = max(q_{SJ_v} + p_{SJ_v}, q_{SM_v} + p_{SM_v} + s_{v SM_v})

The makespan of σ' can then be estimated as Est(C_max(σ')) = r'_v + p_v + q'_v (notice that p_v may change from σ to σ'). This is also a lower bound of the makespan of σ'.

In order to evaluate the accuracy of the estimates, we estimated and evaluated the actual makespan of about 100 million neighbors for instances with different sizes. With regard to N1S, we observed that the estimate coincided with the exact value of the makespan in 88.9% of the neighbors. In the remaining 11.1% of the neighbors, the estimate was on average 2.30% lower than the actual makespan. For N1F, the estimate coincided in 94.0% of the neighbors, and in the remaining 6.0% the estimate was on average 3.16% lower than the actual makespan. Therefore, these estimates are really efficient and appropriate.
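As an illustration, the following sketch (ours, not the paper's implementation) spells out both estimation procedures on top of the hypothetical Solution container and the heads r and tails q computed earlier; the argument names for the new machine neighbours are assumptions.

```python
def estimate_after_reversal(sol, r, q, v, w):
    """Lower bound on C_max(sigma') when the critical arc (v, w) is reversed."""
    s, p = sol.setup, sol.proc

    def head_job(a):
        pj = sol.pj[a]
        return r[pj] + p[pj] if pj is not None else 0

    def tail_job(a):
        sj = sol.sj[a]
        return q[sj] + p[sj] if sj is not None else 0

    x, y = sol.pm[v], sol.sm[w]        # machine neighbours of the arc before the move
    rw = max(head_job(w), (r[x] + p[x] + s[(x, w)]) if x is not None else sol.setup0[w])
    rv = max(head_job(v), rw + p[w] + s[(w, v)])
    qv = max(tail_job(v), (q[y] + p[y] + s[(v, y)]) if y is not None else 0)
    qw = max(tail_job(w), qv + p[v] + s[(w, v)])
    return max(rw + p[w] + qw, rv + p[v] + qv)

def estimate_after_reassignment(sol, r, q, v, new_p, new_pm, new_sm):
    """Lower bound on C_max(sigma') when v is moved to a new machine, where it
    takes new_p time units and sits between new_pm and new_sm (either may be None)."""
    s, p = sol.setup, sol.proc
    rv = max(r[sol.pj[v]] + p[sol.pj[v]] if sol.pj[v] is not None else 0,
             (r[new_pm] + p[new_pm] + s[(new_pm, v)]) if new_pm is not None
             else sol.setup0[v])
    qv = max(q[sol.sj[v]] + p[sol.sj[v]] if sol.sj[v] is not None else 0,
             (q[new_sm] + p[new_sm] + s[(v, new_sm)]) if new_sm is not None else 0)
    return rv + new_p + qv
```

Both functions only touch the heads and tails of the operations involved in the move, which is why evaluating a neighbourhood with these estimates is much cheaper than recomputing the exact makespan of every neighbour.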
4 Memetic algorithm

In this section, we describe the main characteristics of the memetic algorithm used: firstly the genetic algorithm, and then the tabu search which is applied to every chromosome generated by the genetic algorithm.

4.1 Genetic Algorithm

We use a conventional genetic algorithm where the initial population is generated at random. Then, the algorithm iterates over a number of steps or generations. In each iteration, a new generation is built from the previous one by applying selection, recombination and replacement operators.

In the selection phase all chromosomes are grouped into pairs, and then each of these pairs is mated to obtain two offspring. Tabu search is applied to both offspring, and finally the replacement is carried out as a tournament selection from each pair of parents and their two offspring. After tabu search, a chromosome is rebuilt from the improved schedule obtained, so its characteristics can be transferred to subsequent offspring. This effect of the evaluation function is known as Lamarckian evolution.

The coding schema is based on the two-vector representation, which is widely used in the flexible job shop problem (see for example (Gao, Sun, and Gen 2008)). In this representation, each chromosome has one vector with the task sequence and another one with the machine assignment.

The task sequence vector is based on permutations with repetition, as proposed in (Bierwirth 1995) for the JSP. It is a permutation of the set of operations, each being represented by its job number. For example, if we have a problem with 3 jobs, J1 = {θ_11, θ_12}, J2 = {θ_21, θ_22, θ_23, θ_24} and J3 = {θ_31, θ_32, θ_33}, then the sequence (2 1 2 3 2 3 3 2 1) is a valid vector that represents the topological order {θ_21, θ_11, θ_22, θ_31, θ_23, θ_32, θ_33, θ_24, θ_12}. With this encoding, every permutation produces a feasible processing order.

The machine assignment vector holds the machine number used by the task located in the same position in the task sequence vector. For example, if we consider the sequence vector above, then the machine vector (1 2 3 1 2 2 1 2 1) indicates that the tasks θ_21, θ_31, θ_33 and θ_12 use machine 1, the tasks θ_11, θ_23, θ_32 and θ_24 use machine 2, and only the task θ_22 uses machine 3.

For chromosome mating, the genetic algorithm uses an extension of the Job Order Crossover (JOX) described in (Bierwirth 1995) for the classical JSP. Given two parents, JOX selects a random subset of jobs and copies their genes to one offspring in the same positions as in the first parent; then the remaining genes are taken from the second parent so that they maintain their relative ordering. For creating another offspring the parents change their roles. In extending this operator to the flexible case, we also need to consider the machine assignment vector. We propose choosing for every task the assignment it has in the parent it comes from. We clarify how this extended JOX operator works by means of an example. Let us consider the following two parents:

    Parent 1 Sequence:      (2 1 2 3 2 3 3 2 1)
    Parent 1 Assignment:    (1 2 3 1 2 2 1 2 1)
    Parent 2 Sequence:      (1 3 2 2 1 3 2 2 3)
    Parent 2 Assignment:    (3 2 3 1 3 2 1 3 3)

If the selected subset of jobs includes just job 2, then:

    Offspring 1 Sequence:   (2 1 2 3 2 1 3 2 3)
    Offspring 1 Assignment: (1 3 3 2 2 3 2 2 3)
    Offspring 2 Sequence:   (1 3 2 2 3 3 2 2 1)
    Offspring 2 Assignment: (2 1 3 1 2 1 1 3 1)

The operator JOX might swap any two operations requiring the same machine; this is an implicit mutation effect. For this reason, we have not used any explicit mutation operator. Therefore, parameter setting in the experimental study is considerably simplified, as the crossover probability is set to 1 and the mutation probability need not be specified. With this setting, we have obtained results similar to those obtained with a lower crossover probability and a low probability of applying mutation operators. Some authors, for example in (Essafi, Mati, and Dauzère-Pérès 2008) or (González et al. 2012), have already noticed that a mutation operator does not play a relevant role in a memetic algorithm.
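The extended JOX operator described above can be sketched as follows. This is our illustration rather than the authors' code (the paper's implementation is in C++), and the snippet at the end reproduces the worked example when the selected subset of jobs is {2}.

```python
# Sketch of the extended JOX crossover on the two-vector encoding.
# Each parent is a (sequence, assignment) pair of equal-length lists.
def extended_jox(parent1, parent2, selected):
    """Build two offspring; `selected` is the chosen subset of job numbers
    (in the GA it would be drawn at random for every mating)."""
    def cross(donor, filler):
        d_seq, d_asg = donor
        f_seq, f_asg = filler
        seq = [None] * len(d_seq)
        asg = [None] * len(d_seq)
        # genes of the selected jobs keep position and machine of the donor
        for i, (job, mach) in enumerate(zip(d_seq, d_asg)):
            if job in selected:
                seq[i], asg[i] = job, mach
        # remaining genes come from the filler parent, keeping relative order
        rest = iter([(j, m) for j, m in zip(f_seq, f_asg) if j not in selected])
        for i in range(len(seq)):
            if seq[i] is None:
                seq[i], asg[i] = next(rest)
        return seq, asg
    return cross(parent1, parent2), cross(parent2, parent1)

# Reproducing the worked example above with the selected subset {2}.
p1 = ([2, 1, 2, 3, 2, 3, 3, 2, 1], [1, 2, 3, 1, 2, 2, 1, 2, 1])
p2 = ([1, 3, 2, 2, 1, 3, 2, 2, 3], [3, 2, 3, 1, 3, 2, 1, 3, 3])
(off1_seq, off1_asg), (off2_seq, off2_asg) = extended_jox(p1, p2, {2})
assert off1_seq == [2, 1, 2, 3, 2, 1, 3, 2, 3]
assert off1_asg == [1, 3, 3, 2, 2, 3, 2, 2, 3]
```

Because every permutation with repetition is a valid sequence vector and each gene keeps the machine of the parent it comes from, the offspring are always feasible chromosomes.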
Table 1: Summary of results in the SDST-FJSP: SDST-HUdata benchmark
Instance Size Flex. LB IFS GA TS GA+TS T(s.)
la01 10 × 5 1.15 609 726 801 (817) 721(*) (724) 721(*) (724) 6
la02 10 × 5 1.15 655 749 847 (870) 737(*) (738) 737(*) (737) 7
la03 10 × 5 1.15 550 652 760 (789) 652 (652) 652 (652) 7
la04 10 × 5 1.15 568 673 770 (790) 673 (678) 673 (675) 9
la05 10 × 5 1.15 503 603 679 (685) 602(*) (602) 602(*) (602) 8
la06 15 × 5 1.15 833 950 1147 (1165) 956 (961) 953 (957) 12
la07 15 × 5 1.15 762 916 1123 (1150) 912(*) (917) 905(*) (911) 18
la08 15 × 5 1.15 845 948 1167 (1186) 940(*) (951) 940(*) (941) 15
la09 15 × 5 1.15 878 1002 1183 (1210) 1002 (1007) 989(*) (995) 22
la10 15 × 5 1.15 866 977 1127 (1156) 956(*) (960) 956(*) (956) 29
la11 20 × 5 1.15 1087 1256 1577 (1600) 1265 (1273) 1244(*) (1254) 33
la12 20 × 5 1.15 960 1082 1365 (1406) 1105 (1119) 1098 (1107) 26
la13 20 × 5 1.15 1053 1215 1473 (1513) 1210(*) (1223) 1205(*) (1212) 24
la14 20 × 5 1.15 1123 1285 1549 (1561) 1267(*) (1277) 1257(*) (1263) 27
la15 20 × 5 1.15 1111 1291 1649 (1718) 1284(*) (1297) 1275(*) (1282) 29
la16 10 × 10 1.15 892 1007 1256 (1269) 1007 (1007) 1007 (1007) 12
la17 10 × 10 1.15 707 858 1007 (1059) 851(*) (851) 851(*) (851) 12
la18 10 × 10 1.15 842 985 1146 (1184) 985 (988) 985 (992) 10
la19 10 × 10 1.15 796 956 1166 (1197) 951(*) (955) 951(*) (951) 16
la20 10 × 10 1.15 857 997 1194 (1228) 997 (997) 997 (997) 12
MRE 16.29 38.81 (42.27) 15.93 (16.49) 15.55 (15.92)
#best 7 0 12 18
Values in bold are best known solutions, (*) improves previous best known solution.

To build schedules we have used a simple decoding algorithm: the operations are scheduled in exactly the same order as they appear in the chromosome sequence σ. In other words, we produce a semiactive schedule, which means that no operation can start earlier without altering the operation sequence for a given machine assignment.
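A minimal sketch of such a decoder is shown below. It is our reconstruction under assumed data containers (proc, setup, setup0), not the authors' code; it simply applies the start-time rule t_v = max(c_{PJ_v}, c_{PM_v} + s_{PM_v v}) from Section 2 to the operations in the order given by the chromosome.

```python
def decode(seq, asg, proc, setup, setup0):
    """Semiactive decoder for the two-vector encoding.

    seq[i] is a job number (its k-th occurrence is the k-th operation of that
    job) and asg[i] the machine it uses.  proc[(job, op, mach)] is p_ijk,
    setup[(mach, u, v)] is the setup s_uv on mach and setup0[(mach, v)] is s_0v,
    where u and v are (job, op) pairs.  Returns start times and the makespan.
    """
    job_op = {}      # index of the next operation of each job
    job_end = {}     # completion time of the last scheduled operation of each job
    mach_end = {}    # completion time of the last operation on each machine
    mach_last = {}   # that last operation, needed for sequence-dependent setups
    start, cmax = {}, 0
    for job, mach in zip(seq, asg):
        op = job_op.get(job, 0)
        job_op[job] = op + 1
        v = (job, op)
        if mach in mach_last:
            ready_m = mach_end[mach] + setup[(mach, mach_last[mach], v)]
        else:
            ready_m = setup0[(mach, v)]
        t = max(job_end.get(job, 0), ready_m)   # t_v = max(c_PJv, c_PMv + s_PMv,v)
        c = t + proc[(job, op, mach)]
        start[v], job_end[job] = t, c
        mach_end[mach], mach_last[mach] = c, v
        cmax = max(cmax, c)
    return start, cmax
```

Since every operation starts as early as the job and machine predecessors allow, the resulting schedule is semiactive for the given sequence and assignment.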
4.2 Tabu Search

Tabu search (TS) is an advanced local search technique, proposed in (Glover 1989a) and (Glover 1989b), which can escape from local optima by selecting non-improving neighbors. To avoid revisiting recently visited solutions and to explore new promising regions of the search space, it maintains a tabu list with a set of moves which are not allowed when generating the new neighborhood. TS has a solid record of good empirical performance in problem solving. For example, the i-TSAB algorithm from (Nowicki and Smutnicki 2005) is one of the best approaches for the JSP. TS is often used in combination with other metaheuristics.

The general TS scheme used in this paper is similar to that proposed in (Dell'Amico and Trubian 1993). In the first step, the initial solution (provided by the genetic algorithm, as we have seen) is evaluated. It then iterates over a number of steps. At each iteration, a new solution is selected from the neighborhood of the current solution using the estimated makespan as selection criterion. A neighbor is tabu if it is generated by reversing a tabu arc, unless its estimated makespan is better than that of the current best solution. Additionally, we use the dynamic length schema for the tabu list and the cycle checking mechanism proposed in (Dell'Amico and Trubian 1993). TS finishes after a number of iterations without improvement, returning the best solution reached so far.
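The overall loop can be sketched as follows. This is a generic outline under hypothetical hooks (neighbours, estimate, makespan, is_tabu, push_tabu) rather than the authors' implementation, and it omits the dynamic tabu-list length and the cycle checking mentioned above.

```python
def tabu_search(initial, neighbours, estimate, makespan, is_tabu, push_tabu,
                max_no_improve=400):
    """Skeleton of the tabu search applied to every offspring of the GA."""
    best = current = initial
    best_cmax = makespan(initial)
    idle = 0
    while idle < max_no_improve:
        candidates = []
        for move in neighbours(current):            # N1S and N1F moves after filtering
            est = estimate(current, move)           # estimated makespan of the neighbour
            # aspiration: a tabu move is allowed if it beats the best solution so far
            if not is_tabu(move) or est < best_cmax:
                candidates.append((est, move))
        if not candidates:
            break
        est, move = min(candidates, key=lambda c: c[0])
        current = move.apply(current)               # rebuild sigma, heads and tails
        push_tabu(move)                             # forbid the reverse move for a while
        cmax = makespan(current)
        if cmax < best_cmax:
            best, best_cmax, idle = current, cmax, 0
        else:
            idle += 1
    return best
```

The selection criterion inside the loop is the estimated makespan of Section 3.3; only the chosen neighbour is evaluated exactly.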

5 Experimental study

We have conducted an experimental study across benchmarks of common use for both problems, the SDST-FJSP and the FJSP. In both cases our memetic algorithm (GA+TS) was given a population of 100 chromosomes and stopped after 20 generations without improving the best solution of the population. Also, the stopping criterion for tabu search is set to 400 iterations without improvement. We have implemented our method in C++ on a PC with an Intel Core 2 Duo at 2.66 GHz and 2 GB RAM.

We also show the results produced by the genetic algorithm (GA) and the TS approach separately, to compare each of them with the hybridized approach. GA was given a population of 100 chromosomes, and its stopping criterion is the run time used by GA+TS (i.e., it is not a maximum number of generations). For running TS alone we have opted to set the same stopping criterion as in GA+TS (400 iterations without improvement), and to launch TS starting from random schedules as many times as possible in the run time used by GA+TS. We have tried several different configurations for running GA and TS alone, with similar or worse results than those described here.

5.1 Comparison with the state-of-the-art in the SDST-FJSP

As we have pointed out, there are few papers that tackle the SDST-FJSP. To our knowledge, the most representative approach to this problem is the iterative flattening search (IFS) proposed in (Oddi et al. 2011), so we chose this method to compare with our proposal. We consider the same benchmark used in that paper, which is denoted SDST-HUdata. It consists of 20 instances derived from the first 20 instances of the data subset of the FJSP benchmark proposed in (Hurink, Jurisch, and Thole 1994). Each instance was created by adding to the original instance one setup time matrix st^r for each machine r; the same setup time matrix was added for each machine in all benchmark instances. Each matrix has size n×n, and the value st^r_ij indicates the setup time needed to reconfigure machine r when it switches from job i to job j. These setup times are sequence dependent and they fulfill the triangle inequality.

IFS is implemented in Java and run on an AMD Phenom II X4 Quad 3.5 GHz under Linux Ubuntu 10.4.1, with a maximum CPU time limit set to 800 seconds for all runs. We consider here the best makespan reported in (Oddi et al. 2011) for each instance, regardless of the configuration used.
Table 2: Summary of results in the FJSP: DP Benchmark
Instance Size Flex. LB TS hGA CDDS GA+TS T(s.)
01a 10 × 5 1.13 2505 2518 (2528) 2518 (2518) 2518 (2525) 2505(*) (2511) 74
02a 10 × 5 1.69 2228 2231 (2234) 2231 (2231) 2231 (2235) 2232 (2234) 120
03a 10 × 5 2.56 2228 2229 (2230) 2229 (2229) 2229 (2232) 2229 (2230) 143
04a 10 × 5 1.13 2503 2503 (2516) 2515 (2518) 2503 (2510) 2503 (2504) 72
05a 10 × 5 1.69 2189 2216 (2220) 2217 (2218) 2216 (2218) 2219 (2221) 123
06a 10 × 5 2.56 2162 2203 (2206) 2196 (2198) 2196 (2203) 2200 (2204) 157
07a 15 × 8 1.24 2187 2283 (2298) 2307 (2310) 2283 (2296) 2266(*) (2286) 201
08a 15 × 8 2.42 2061 2069 (2071) 2073 (2076) 2069 (2069) 2072 (2075) 197
09a 15 × 8 4.03 2061 2066 (2067) 2066 (2067) 2066 (2067) 2066 (2067) 291
10a 15 × 8 1.24 2178 2291 (2306) 2315 (2315) 2291 (2303) 2267(*) (2273) 240
11a 15 × 8 2.42 2017 2063 (2066) 2071 (2072) 2063 (2072) 2068 (2071) 222
12a 15 × 8 4.03 1969 2034 (2038) 2030 (2031) 2031 (2034) 2037 (2041) 266
13a 20 × 10 1.34 2161 2260 (2266) 2257 (2260) 2257 (2260) 2271 (2276) 241
14a 20 × 10 2.99 2161 2167 (2168) 2167 (2168) 2167 (2179) 2169 (2171) 340
15a 20 × 10 5.02 2161 2167 (2167) 2165 (2165) 2165 (2170) 2166 (2166) 470
16a 20 × 10 1.34 2148 2255 (2259) 2256 (2258) 2256 (2258) 2266 (2271) 253
17a 20 × 10 2.99 2088 2141 (2144) 2140 (2142) 2140 (2146) 2147 (2150) 333
18a 20 × 10 5.02 2057 2137 (2140) 2127 (2131) 2127 (2132) 2138 (2141) 488
MRE 2.01 (2.24) 2.12 (2.19) 1.94 (2.19) 1.99 (2.17)
#best 9 10 13 6
Values in bold are best known solutions, (*) improves previous best known solution.

Table 1 shows the results of the experiments on the SDST-HUdata benchmark. In particular, we indicate for each instance the name, the size (n × m), the flexibility (i.e., the average number of available machines per operation) and a lower bound LB. The lower bounds are those reported in (Mastrolilli and Gambardella 2000) for the original instances without setups; therefore, they are probably far from the optimal solutions. For IFS we indicate the best results reported in (Oddi et al. 2011), and for GA, TS and GA+TS we indicate the best and average makespan in 10 runs for each instance. We also show the runtime in seconds of a single run of our algorithms. Additionally, we report the MRE (Mean Relative Error) for each method, calculated as MRE = (Cmax − LB)/LB × 100. Finally, in the bottom line we indicate the number of instances for which a method obtains the best known solution (#best). We mark in bold the best known solutions, and we mark with a "(*)" the cases in which a method improves the previous best known solution.

We notice that GA alone obtains very poor results, with a MRE of 38.81% for the best solutions. TS alone performs much better than GA, with a MRE of 15.93% for the best solutions, and it was able to reach the best known solution in 12 of the 20 instances. However, the hybridized approach obtains even better results. In fact, GA+TS obtains a better average makespan than TS in 13 instances, the same in 6 instances and a worse average in only 1 instance. This shows the good synergy between the two metaheuristics.

Overall, compared to IFS, GA+TS establishes new best solutions for 13 instances, reaches the same best known solution for 5 instances, and for the instances la06 and la12 the solution reached is worse than the current best known solution. Regarding the average makespan, it is better than the best solution obtained by IFS in 13 instances, equal in 3 instances and worse in 4 instances. IFS achieved a MRE of 16.29%, while GA+TS achieved a MRE of 15.55% for the best solutions and 15.92% for the average values. Additionally, the CPU time of GA+TS (between 6 and 33 seconds per run depending on the instance) is lower than that of IFS (800 seconds per run). However, CPU times are not directly comparable due to the differences in programming languages, operating systems and target machines. In conclusion, GA+TS is better than GA and TS alone, and it is quite competitive with IFS.

5.2 Comparison with the state-of-the-art in the FJSP

In the FJSP it is easier to compare with the state of the art, because of the number of existing works. We consider several sets of problem instances: the DP benchmark proposed in (Dauzère-Pérès and Paulli 1997) with 18 instances, the BC benchmark proposed in (Barnes and Chambers 1996) with 21 instances, and the BR benchmark proposed in (Brandimarte 1993) with 10 instances.
Table 3: Summary of results in the FJSP: BC Benchmark
Instance Size Flex. LB TS hGA CDDS GA+TS T(s.)
mt10c1 10 × 11 1.10 655 928 (928) 927 (927) 928 (929) 927 (927) 14
mt10cc 10 × 12 1.20 655 910 (910) 910 (910) 910 (911) 908(*) (909) 14
mt10x 10 × 11 1.10 655 918 (918) 918 (918) 918 (918) 918 (922) 18
mt10xx 10 × 12 1.20 655 918 (918) 918 (918) 918 (918) 918 (918) 16
mt10xxx 10 × 13 1.30 655 918 (918) 918 (918) 918 (918) 918 (918) 19
mt10xy 10 × 12 1.20 655 906 (906) 905 (905) 906 (906) 905 (905) 16
mt10xyz 10 × 13 1.30 655 847 (850) 849 (849) 849 (851) 849 (850) 21
setb4c9 15 × 11 1.10 857 919 (919) 914 (914) 919 (919) 914 (914) 22
setb4cc 15 × 12 1.20 857 909 (912) 914 (914) 909 (911) 907(*) (907) 22
setb4x 15 × 11 1.10 846 925 (925) 925 (931) 925 (925) 925 (925) 18
setb4xx 15 × 12 1.20 846 925 (926) 925 (925) 925 (925) 925 (925) 19
setb4xxx 15 × 13 1.30 846 925 (925) 925 (925) 925 (925) 925 (925) 20
setb4xy 15 × 12 1.20 845 916 (916) 916 (916) 916 (916) 910(*) (910) 25
setb4xyz 15 × 13 1.30 838 905 (908) 905 (905) 905 (907) 905 (905) 19
seti5c12 15 × 16 1.07 1027 1174 (1174) 1175 (1175) 1174 (1175) 1171(*) (1173) 41
seti5cc 15 × 17 1.13 955 1136 (1136) 1138 (1138) 1136 (1137) 1136 (1137) 40
seti5x 15 × 16 1.07 955 1201 (1204) 1204 (1204) 1201 (1202) 1199(*) (1200) 43
seti5xx 15 × 17 1.13 955 1199 (1201) 1202 (1203) 1199 (1199) 1197(*) (1198) 38
seti5xxx 15 × 18 1.20 955 1197 (1198) 1204 (1204) 1197 (1198) 1197 (1197) 40
seti5xy 15 × 17 1.13 955 1136 (1136) 1136 (1137) 1136 (1138) 1136 (1137) 39
seti5xyz 15 × 18 1.20 955 1125 (1127) 1126 (1126) 1125 (1125) 1127 (1128) 41
MRE 22.53 (22.63) 22.61 (22.66) 22.54 (22.60) 22.42 (22.49)
#best 12 11 11 19
Values in bold are best known solutions, (*) improves previous best known solution.

We compare GA+TS with the tabu search (TS) of (Mastrolilli and Gambardella 2000), the hybrid genetic algorithm (hGA) of (Gao, Sun, and Gen 2008) and the climbing depth-bounded discrepancy search (CDDS) of (Hmida et al. 2010). These three methods are, as far as we know, the best existing approaches.

TS was coded in C++ on a 266 MHz Pentium. The authors execute 5 runs per instance and limit the maximum number of iterations to between 100000 and 500000, depending on the instance. With this configuration they report run times between 28 and 150 seconds in the DP benchmark, between 1 and 24 seconds in the BC benchmark, and between 0.01 and 8 seconds in the BR benchmark.

hGA was implemented in Delphi on a 3.0 GHz Pentium. Depending on the complexity of the problems, the population size of hGA ranges from 300 to 3000, and the number of generations is limited to 200. They also execute 5 runs per instance, and with the described configuration they report run times between 96 and 670 seconds in the DP benchmark, between 10 and 72 seconds in the BC benchmark, and between 1 and 20 seconds in the BR benchmark.

CDDS was coded in C on an Intel Core 2 Duo 2.9 GHz PC with 2 GB of RAM. It is a deterministic algorithm; however, the paper reports the results of 4 runs per instance, one for each of the neighborhood structures the authors propose, so we report the best and average solutions for the method. The authors set the maximum CPU time to 15 seconds for all test instances, except for the DP benchmark, in which the maximum CPU time is set to 200 seconds.

GA+TS was run 10 times for each instance. The run time of the algorithm grows with the size and flexibility of the instance. The CPU times range from 72 to 488 seconds in the DP benchmark, from 14 to 43 seconds in the BC benchmark, and from 6 to 112 seconds in the BR benchmark. Therefore, the run times are similar to those of the other methods, although they are not directly comparable due to the differences in languages and target machines. However, we have seen that in easy instances the best solution is found in the very first generations; therefore, for many of these instances GA+TS requires a much smaller CPU time to reach the same solution.

Tables 2, 3 and 4 show the results of the experiments in the DP benchmark, BC benchmark and BR benchmark, respectively. As we did for Table 1, we indicate for each instance the name, size, flexibility, and the lower bound reported in (Mastrolilli and Gambardella 2000). Then, for each method we report the best and average makespan. We also indicate the runtime of a single run of GA+TS, and finally the MRE for each method and the number of instances for which a method reaches the best known solution.

In the DP benchmark GA+TS improves the previous best known solution in 3 of the 18 instances (01a, 07a and 10a). It is remarkable that we prove the optimality of the solution 2505 for instance 01a, as this is also its lower bound. Moreover, the MRE of the average makespan of GA+TS is the best of the four algorithms considered (2.17% versus 2.19%, 2.19% and 2.24%). The MRE of the best makespan is the second best of the four algorithms, although in this case we have to be aware that our algorithm has some advantage, since we launched it more times for each instance.
Table 4: Summary of results in the FJSP: BR Benchmark
Instance Size Flex. LB TS hGA CDDS GA+TS T(s.)
Mk01 10 × 6 2.09 36 40 (40) 40 (40) 40 (40) 40 (40) 6
Mk02 10 × 6 4.10 24 26 (26) 26 (26) 26 (26) 26 (26) 13
Mk03 15 × 8 3.01 204 204 (204) 204 (204) 204 (204) 204 (204) 9
Mk04 15 × 8 1.91 48 60 (60) 60 (60) 60 (60) 60 (60) 20
Mk05 15 × 4 1.71 168 173 (173) 172 (172) 173 (174) 172 (172) 17
Mk06 10 × 15 3.27 33 58 (58) 58 (58) 58 (59) 58 (58) 54
Mk07 20 × 5 2.83 133 144 (147) 139 (139) 139 (139) 139 (139) 40
Mk08 20 × 10 1.43 523 523 (523) 523 (523) 523 (523) 523 (523) 14
Mk09 20 × 10 2.53 299 307 (307) 307 (307) 307 (307) 307 (307) 26
Mk10 20 × 15 2.98 165 198 (199) 197 (197) 197 (198) 199 (200) 112
MRE 15.41 (15.83) 14.92 (14.92) 14.98 (15.36) 15.04 (15.13)
#best 7 10 9 9
Values in bold are best known solutions.

In the BC benchmark we are able to improve the previous best known solution in 6 of the 21 instances (mt10cc, setb4cc, setb4xy, seti5c12, seti5x and seti5xx). Additionally, the MRE of both the best and the average makespan of GA+TS is the best of the four algorithms considered. In particular, considering the best makespan our MRE is 22.42% (versus 22.53%, 22.54% and 22.61%), and considering the average makespan our MRE is 22.49% (versus 22.60%, 22.63% and 22.66%). Moreover, the number of instances for which we obtain the best known solution is the highest of the four methods: 19 of the 21 instances.

In the BR benchmark we obtain the best known solution in 9 of the 10 instances; only hGA is able to improve on that number. Regarding the MRE, GA+TS is second best considering the average makespan and third best considering the best makespan. We can conclude that the BR benchmark is generally easy, as for most instances all four methods reached the best known solution in every run. In many of these instances GA+TS reached the best known solution in the first generation of each run.

Overall, GA+TS shows a lower efficiency in the largest and most flexible instances, compared to the other algorithms. In our opinion this is due to the fact that GA+TS needs more run time to improve the obtained solution in these instances. Additionally, regarding run times in the FJSP benchmarks, we have to notice that our algorithm is at a disadvantage, because it does all the necessary setup calculations even when the setup times are zero, as happens in these instances.

In summary, we can conclude that GA+TS is competitive with the state-of-the-art and was able to obtain new upper bounds for 9 of the 49 instances considered.

6 Conclusions

We have considered the flexible job shop scheduling problem with sequence-dependent setup times, where the objective is to minimize the makespan. We have proposed a neighborhood structure termed N1SF which is the union of two structures: N1S, designed to modify the sequencing of the tasks, and N1F, designed to modify the machine assignments. This structure has then been used in a tabu search algorithm, which is embedded in a genetic algorithm framework. We have defined methods for estimating the makespan of both kinds of neighbors and empirically shown that they are very accurate. In the experimental study we have shown that the hybridized approach (GA+TS) is better than each method alone. We have also compared our approach against state-of-the-art algorithms, both on FJSP and SDST-FJSP benchmarks. In the SDST-FJSP we have compared GA+TS with the algorithm proposed in (Oddi et al. 2011) across the 20 instances described in that paper. For the FJSP we have considered three sets of instances, comparing GA+TS with the methods proposed in (Gao, Sun, and Gen 2008), (Mastrolilli and Gambardella 2000) and (Hmida et al. 2010). Our proposal compares favorably to the state-of-the-art methods in both problems considered, and we are able to improve the best known solution in 9 of the 49 FJSP instances and in 13 of the 20 SDST-FJSP instances.

As future work we plan to experiment across the FJSP benchmark proposed in (Hurink, Jurisch, and Thole 1994), and also across the SDST-FJSP benchmark proposed in (Saidi-Mehrabad and Fattahi 2007). We also plan to define a new SDST-FJSP benchmark with bigger instances and with more flexibility than the ones proposed in (Oddi et al. 2011). We shall also consider different crossover operators and other metaheuristics such as scatter search with path relinking. Finally, we plan to extend our approach to other variants of scheduling problems which are even closer to real environments, for instance problems with uncertain durations, or problems considering alternative objective functions such as weighted tardiness.

Acknowledgements

We would like to thank Angelo Oddi and Riccardo Rasconi for providing us with the SDST-FJSP instances. All authors are supported by Grant MEC-FEDER TIN2010-20976-C02-02.
References

Barnes, J., and Chambers, J. 1996. Flexible job shop scheduling by tabu search. Technical Report Series ORP96-09, Graduate Program in Operations Research and Industrial Engineering, The University of Texas at Austin.

Bierwirth, C. 1995. A generalized permutation approach to job shop scheduling with genetic algorithms. OR Spectrum 17:87–92.

Brandimarte, P. 1993. Routing and scheduling in a flexible job shop by tabu search. Annals of Operations Research 41:157–183.

Brucker, P., and Thiele, O. 1996. A branch and bound method for the general-job shop problem with sequence-dependent setup times. Operations Research Spektrum 18:145–161.

Dauzère-Pérès, S., and Paulli, J. 1997. An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search. Annals of Operations Research 70(3):281–306.

Dell'Amico, M., and Trubian, M. 1993. Applying tabu search to the job-shop scheduling problem. Annals of Operations Research 41:231–252.

Essafi, I.; Mati, Y.; and Dauzère-Pérès, S. 2008. A genetic local search algorithm for minimizing total weighted tardiness in the job-shop scheduling problem. Computers and Operations Research 35:2599–2616.

Gao, J.; Sun, L.; and Gen, M. 2008. A hybrid genetic and variable neighborhood descent algorithm for flexible job shop scheduling problems. Computers and Operations Research 35:2892–2907.

Glover, F. 1989a. Tabu search, part I. ORSA Journal on Computing 1(3):190–206.

Glover, F. 1989b. Tabu search, part II. ORSA Journal on Computing 2(1):4–32.

González, M. A.; González-Rodríguez, I.; Vela, C.; and Varela, R. 2012. An efficient hybrid evolutionary algorithm for scheduling with setup times and weighted tardiness minimization. Soft Computing 16(12):2097–2113.

González, M. A.; Vela, C.; and Varela, R. 2012. A competent memetic algorithm for complex scheduling. Natural Computing 11:151–160.

Hmida, A.; Haouari, M.; Huguet, M.; and Lopez, P. 2010. Discrepancy search for the flexible job shop scheduling problem. Computers and Operations Research 37:2192–2201.

Hurink, J.; Jurisch, B.; and Thole, M. 1994. Tabu search for the job shop scheduling problem with multi-purpose machines. Operations Research Spektrum 15:205–215.

Mastrolilli, M., and Gambardella, L. 2000. Effective neighborhood functions for the flexible job shop problem. Journal of Scheduling 3(1):3–20.

Mattfeld, D. 1995. Evolutionary Search and the Job Shop: Investigations on Genetic Algorithms for Production Scheduling. Springer-Verlag.

Nowicki, E., and Smutnicki, C. 2005. An advanced tabu search algorithm for the job shop problem. Journal of Scheduling 8:145–159.

Oddi, A.; Rasconi, R.; Cesta, A.; and Smith, S. 2011. Applying iterative flattening search to the job shop scheduling problem with alternative resources and sequence dependent setup times. In COPLAS 2011: Proceedings of the Workshop on Constraint Satisfaction Techniques for Planning and Scheduling Problems.

Saidi-Mehrabad, M., and Fattahi, P. 2007. Flexible job shop scheduling with tabu search algorithms. International Journal of Advanced Manufacturing Technology 32:563–570.

Taillard, E. 1993. Parallel taboo search techniques for the job shop scheduling problem. ORSA Journal on Computing 6:108–117.

Van Laarhoven, P.; Aarts, E.; and Lenstra, K. 1992. Job shop scheduling by simulated annealing. Operations Research 40:113–125.

Vela, C. R.; Varela, R.; and González, M. A. 2010. Local search and genetic algorithm for the job shop scheduling problem with sequence dependent setup times. Journal of Heuristics 16:139–165.

