
A Hybrid Ant Algorithm for the Airline Crew Pairing Problem

Broderick Crawford1,2, Carlos Castro2, and Eric Monfroy2,3,


1 Pontificia Universidad Católica de Valparaíso, PUCV, Chile
[email protected]
2 Universidad Técnica Federico Santa María, Valparaíso, Chile
[email protected]
3 LINA, Université de Nantes, France
[email protected]

Abstract. This article analyzes the performance of Ant Colony Optimization
algorithms on the resolution of the Crew Pairing Problem, one
of the most critical processes in airline management operations. Furthermore,
we explore the hybridization of Ant algorithms with Constraint
Programming techniques. We show that, for the instances tested
from Beasley's OR-Library, the use of this kind of hybrid algorithm
obtains good results compared to the best performing metaheuristics in the
literature.

Keywords: Ant Colony Optimization, Constraint Programming, Hybrid
Algorithm, Crew Pairing Optimization, Set Partitioning Problem.

1 Introduction

Crew pairing is one of the most critical processes in airline management operations.
Taking a long-term flight schedule as input, the objective of this process
is to partition the schedule of airline flights, without breaking constraints (rules
and regulations), into individual flight sequences called pairings. A pairing is a
sequence of flight legs for an unspecified crew member starting and finishing at
the same city. The problem has attracted many people (managers and scientists)
in recent decades. The main challenge is that there is no general method
that works well with all kinds of non-linear cost functions and constraints (hard and
soft). Furthermore, this problem becomes more complicated with the increasing
size of the input. The pairing problem can be formulated as a Set Partitioning
Problem (SPP) or as an equality-constrained Set Covering Problem (SCP); in
this formulation the rows are flights and the columns are pairings [3]. In this
work, we solve some test instances of Airline Flight Crew Scheduling with Ant
Colony Optimization (ACO) algorithms and some hybridizations of ACO with
Constraint Programming (CP) techniques such as Forward Checking. The
computational results that we have obtained show good behaviour in comparison
with the best performing metaheuristics in the literature [6,18,15].

The authors have been partially supported by the project INRIA-CONICYT
VANANAA. The first author has also been partially supported by the project
PUCV 209.473/2006. The third author has also been partially supported by the
Chilean National Science Fund through the project FONDECYT 1060373.
There exist some problems for which the effectiveness of ACO is limited,
among them the strongly constrained problems. Those are problems for which
neighbourhoods contain few solutions, or none at all, and local search has a very
limited use. Probably the most significant of those problems is the SPP, and a
direct implementation of the basic ACO framework is unable to obtain feasible
solutions for many standard SPP test instances [19]. The best performing
metaheuristic for SPP is a genetic algorithm due to Chu and Beasley [6,5]. There
already exist some first approaches applying ACO to the SCP. In [1,16] ACO
has only been used as a construction algorithm, and the approach has only been
tested on some small SCP instances. More recent works [14,17,13] apply Ant
Systems to the SCP and related problems, using techniques to remove redundant
columns and local search to improve solutions. Taking into account these results,
it seems that the incomplete approach of Ant Systems can be considered a
good alternative for solving these problems when complete techniques are not
able to find the optimal solution in a reasonable time.
In this paper, we explore the addition of a lookahead mechanism to the two
main ACO algorithms: Ant System (AS) and Ant Colony System (ACS). Trying
to solve larger instances of SPP with AS or ACS implementations results in
many infeasible labellings of variables, and the ants cannot obtain complete
solutions using the classic transition rule when they move in their neighbourhood.
In this paper, we propose the addition of a lookahead mechanism in the
construction phase of ACO so that only feasible partial solutions are generated.
The lookahead mechanism allows the incorporation of information about the
instantiation of variables after the current decision. This idea differs from the
ones proposed in [21] and [12]; those authors propose a lookahead function
that evaluates the pheromone in the Shortest Common Supersequence Problem and
estimates the quality of a partial solution of an Industrial Scheduling Problem,
respectively. This paper is organised as follows: Section 2 is dedicated to the
presentation of the problem and its mathematical model. In Section 3, we de-
scribe the applicability of the ACO algorithms for solving SPP and an example
of Constraint Propagation is given. In Section 4, we present the basic concepts
of adding Constraint Programming techniques to the two basic ACO algorithms:
AS and ACS. In Section 5, we present results when adding Constraint Programming
techniques to the two basic ACO algorithms to solve some Airline Flight
Crew Scheduling instances taken from NorthWest Airlines benchmarks available in the
OR-Library of Beasley [4]. Furthermore, our results are compared with the best
performing non-ACO metaheuristics. Finally, in Section 6 we conclude the paper
and give some perspectives for future research.

2 Problem Description
One of the most challenging cases of operational planning, scheduling, and
control may be found in the airline industry. The efficient management of
operations has become more challenging and complex with the passage of time,
and this industry is constantly striving to maximize profits within a competi-
tive environment. Although Operations Research and Artificial Intelligence tools
have been applied for several decades, its problems are still challenging scientists
and software engineers. The size of these problems is increasing and restrictions
on them are becoming more and more complicated.
It is assumed that a timetable of flights operated in a schedule period already
exists to match the expectations of market demand. Then, there are planning
and scheduling tasks for aircraft and crews. The first problem is called the Fleet
Assignment Problem and takes the timetable as input. The results of the fleet
assignment problem are the exact departure time for each flight leg and the
sequence of flight legs for an aircraft. Leaving aside fuel costs, the most important
direct operating cost is personnel. Therefore, a second problem, called the Crew
Scheduling Problem, is very important. This problem is often divided into two
smaller problems: the Crew Pairing Problem and the Crew Rostering Problem
(also called the Crew Assignment Problem). The crew pairing problem takes the
scheduled flights which were fixed by the fleet assignment step as input. Instead
of assigning aircraft, the aim now is to allocate crews to cover all flight legs and
optimize an objective function. In the crew pairing process, planners do not
consider individual crew members; the scheduling is usually done for one period
and the result can be reused for other periods. The flights are grouped into
small sets called pairings (or rotations) which must start from a home base
and end at that base. The rostering process then performs the remaining task of
assigning individual crew members to flight legs. All published methods attempt to separate
the problem of generating pairings from the problem of selecting the best subset
of these pairings. The remaining optimization problem is then modelled under
the assumption that the set of feasible pairings and their costs are explicitly
available, and can be expressed as a Set Partitioning Problem. The SPP model
is valid for the daily problem as well as the weekly problem and the fully dated
problem.
SPP is the NP-complete problem of partitioning a given set into mutually
independent subsets while minimizing a cost function defined as the sum of the
costs associated with each of the eligible subsets. In the SPP matrix formulation
we are given an m × n matrix A = (aij) in which all the matrix elements are
either zero or one. Additionally, each column is given a non-negative cost cj. We
say that a column j can cover a row i if aij = 1. Let J denote the set of
columns and xj a binary variable which is one if column j is chosen and zero
otherwise. The SPP can be defined formally as follows:

\text{Minimize} \quad f(x) = \sum_{j=1}^{n} c_j \, x_j \qquad (1)

\text{Subject to} \quad \sum_{j=1}^{n} a_{ij} \, x_j = 1, \quad \forall i = 1, \ldots, m \qquad (2)
In this formulation, each row represents a flight leg that must be scheduled.
The columns represent pairings. Each pairing is a sequence of flights to be covered
by a single crew over a 2 to 3 day period. It must begin and end in the base city
where the crew resides [22].

3 Ant Colony Optimization for Set Partitioning Problems


In this section, we briefly present ACO algorithms and give a description of their
use to solve SPP. More details about ACO algorithms can be found in [8,9]. The
basic idea of ACO algorithms comes from the capability of real ants to find short-
est paths between the nest and food source. From a Combinatorial Optimization
point of view, the ants are looking for good solutions. Real ants cooperate in
their search for food by depositing pheromone on the ground. An artificial ant
colony simulates this behavior implementing artificial ants as parallel processes
whose role is to build solutions using a randomized constructive search driven by
pheromone trails and heuristic information of the problem. An important topic
in ACO is the adaptation of the pheromone trails during algorithm execution to
take into account the cumulated search experience: reinforcing the pheromone
associated with good solutions and considering the evaporation of the pheromone
on the components over time in order to avoid premature convergence. ACO can
be applied in a very straightforward way to SPP. The columns are chosen as the
solution components and have an associated cost and pheromone trail [10]. Each
column can be visited by an ant only once and then a final solution has to cover
all rows. A walk of an ant over the graph representation corresponds to the
iterative addition of columns to the partial solution obtained so far. Each ant
starts with an empty solution and adds columns until a cover is completed. A
pheromone trail τj and a heuristic information ηj are associated to each eligible
column j. A column to be added is chosen with a probability that depends of
pheromone trail and the heuristic information. The most common form of the
ACO decision policy (Transition Rule Probability) when ants work with compo-
nents is:
p_j^k(t) = \frac{\tau_j \, [\eta_j]^{\beta}}{\sum_{l \notin S^k} \tau_l \, [\eta_l]^{\beta}} \quad \text{if } j \notin S^k \qquad (3)

where S^k is the partial solution of ant k. The β parameter controls how
important η is in the probabilistic decision [10,17].

Pheromone trail τj. One of the most crucial design decisions to be made
in ACO algorithms is the modelling of the set of pheromones. In the original
ACO implementation for TSP the choice was to put a pheromone value on
every link between a pair of cities, but for other combinatorial problems
pheromone values can often be assigned to the decision variables (first order
pheromone values) [10]. In this work the pheromone trail is put on the problem's
components (each eligible column j) instead of the problem's connections, and
setting a good pheromone quantity is not a trivial task either. The quantity of
pheromone trail laid on columns is based on the idea that the more pheromone
trail there is on a particular item, the more profitable that item is [16]. The
pheromone deposited in each component is therefore related to its frequency in
the ants' solutions. In this work we divide this frequency by the number of ants,
obtaining better results.
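For illustration, a minimal C sketch of such an update could look as follows.
This is not the code of our actual implementation; the function name, the layout
of the solutions array and the parameter names are illustrative assumptions only.

#include <stddef.h>

/* Sketch (assumed, not the actual implementation): evaporation at rate rho
   followed by a frequency-based deposit, divided by the number of ants as
   described in the text.                                                   */
void update_pheromone(double tau[], int ncols,
                      int **solutions,   /* solutions[k][j] = 1 if ant k chose column j */
                      int nants, double rho)
{
    for (int j = 0; j < ncols; j++) {
        int freq = 0;
        for (int k = 0; k < nants; k++)
            freq += solutions[k][j];
        tau[j] = (1.0 - rho) * tau[j]          /* evaporation               */
               + (double)freq / nants;         /* frequency-based deposit   */
    }
}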
Heuristic information ηj. In this paper we use a dynamic heuristic
information that depends on the partial solution of an ant. It can be defined as
ηj = ej /cj , where ej is the so-called cover value, that is, the number of additional
rows covered when adding column j to the current partial solution, and cj is the
cost of column j. In other words, the heuristic information measures the unit
cost of covering one additional row. An ant ends the solution construction when
all rows are covered.
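As an illustration, the C sketch below selects the next column with the AS
transition rule of Equation 3, using the dynamic heuristic ηj = ej/cj defined
above. It is a hedged example, not the code used in our experiments; the names
choose_column, in_solution and newly_covered are hypothetical.

#include <math.h>
#include <stdlib.h>

/* Sketch (assumed): roulette-wheel selection over the columns not yet in the
   partial solution, weighted by tau_j * eta_j^beta, as in Equation 3.       */
int choose_column(int ncols, const double tau[], const double cost[],
                  const int in_solution[],   /* 1 if column j is already in S^k      */
                  const int newly_covered[], /* e_j w.r.t. the partial solution      */
                  double beta)
{
    double weight[ncols], total = 0.0;
    for (int j = 0; j < ncols; j++) {
        weight[j] = 0.0;
        if (!in_solution[j] && newly_covered[j] > 0) {
            double eta = (double)newly_covered[j] / cost[j];
            weight[j] = tau[j] * pow(eta, beta);
        }
        total += weight[j];
    }
    if (total <= 0.0) return -1;                       /* no eligible column left */

    double r = ((double)rand() / RAND_MAX) * total;    /* roulette wheel          */
    int last = -1;
    for (int j = 0; j < ncols; j++) {
        if (weight[j] <= 0.0) continue;
        last = j;
        r -= weight[j];
        if (r <= 0.0) return j;
    }
    return last;                                       /* numerical fallback      */
}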
In this work, we use two instances of ACO: Ant System (AS) and Ant Colony
System (ACS) algorithms, the original and the most famous algorithms in the
ACO family [10]. ACS improves the search of AS by using a different transition
rule in the constructive phase, exploiting the heuristic information more
aggressively, using a candidate list for future labelling, and using a different
pheromone treatment. ACS has demonstrated better performance than AS in
a wide range of problems [9]. ACS exploits a pseudo-random transition rule in
the solution construction; ant k chooses the next column j with the following criterion:
 
j = \operatorname{argmax}_{l \notin S^k} \left\{ \tau_l \, [\eta_l]^{\beta} \right\} \quad \text{if } q \le q_0 \qquad (4)

and following the Transition Rule Probability (Equation 3) otherwise, where
q is a random number uniformly distributed in [0, 1] and q0 is a parameter that
controls how strongly the ants deterministically exploit the pheromone trail and
the heuristic information.
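A corresponding sketch of the ACS rule, again illustrative rather than the
actual implementation, reuses the hypothetical choose_column above: with
probability q0 the ant greedily picks the best column, otherwise it falls back to
the probabilistic rule of Equation 3.

/* Sketch (assumed): ACS rule (4).  With probability q0 exploit greedily,
   otherwise use the AS roulette-wheel rule sketched above.                  */
int choose_column_acs(int ncols, const double tau[], const double cost[],
                      const int in_solution[], const int newly_covered[],
                      double beta, double q0)
{
    double q = (double)rand() / RAND_MAX;
    if (q <= q0) {
        int best = -1;
        double best_w = -1.0;
        for (int j = 0; j < ncols; j++) {
            if (in_solution[j] || newly_covered[j] == 0) continue;
            double eta = (double)newly_covered[j] / cost[j];
            double w = tau[j] * pow(eta, beta);
            if (w > best_w) { best_w = w; best = j; }
        }
        return best;                                   /* exploitation        */
    }
    return choose_column(ncols, tau, cost, in_solution,
                         newly_covered, beta);         /* biased exploration  */
}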
Trying to solve larger instances of SPP with the original AS or ACS implementations
results in many infeasible labellings of variables, and the ants cannot
obtain complete solutions. In this paper we explore the addition of a lookahead
mechanism in the construction phase of ACO so that only feasible solutions are
generated. A direct implementation of the basic ACO framework is incapable of
obtaining feasible solutions for many SPP instances. An example will be given in
order to explain the difficulties ACO has in solving SPP. Table 1, taken from [22],
shows a flight schedule for American Airlines. The table enumerates possible pairings,
or sequences of flights to be covered by a single crew over a 2 to 3 day period, and
their costs. A pairing must begin and end in the base city where the crew resides.
For example, pairing j = 1 begins at a known city (Miami in the [22] example)
with flight 101 (Miami-Chicago). After a layover in Chicago the crew covers
flight 203 (Chicago-Dallas) and then flight 406 (Dallas-Charlotte) to Charlotte.
Finally, flight 308 (Charlotte-Miami) returns them to Miami. The total cost of
pairing j = 1 is $ 2900.
Having enumerated a list of pairings like Table 1, the remaining task is to
find a minimum total cost collection of columns staffing each flight exactly once.
Defining the decision variables xj equal to 1 if pairing j is chosen and 0 otherwise,
the corresponding SPP model must be solved.

Table 1. Possible Pairings for AA Example

Pairing j Flight Sequence Cost $


1 101-203-406-308 2900
2 101-203-407 2700
3 101-204-305-407 2600
4 101-204-308 3000
5 203-406-310 2600
6 203-407-109 3150
7 204-305-407-109 2550
8 204-308-109 2500
9 305-407-109-212 2600
10 308-109-212 2050
11 402-204-305 2400
12 402-204-310-211 3600
13 406-308-109-211 2550
14 406-310-211 2650
15 407-109-211 2350

Minimize 2900x1 + 2700x2 + 2600x3 + 3000x4 + 2600x5 + 3150x6 + 2550x7 +
2500x8 + 2600x9 + 2050x10 + 2400x11 + 3600x12 + 2550x13 + 2650x14 + 2350x15

Subject to

x1 + x2 + x3 + x4 = 1                          (flight 101)
x6 + x7 + x8 + x9 + x10 + x13 + x15 = 1        (flight 109)
x1 + x2 + x5 + x6 = 1                          (flight 203)
x3 + x4 + x7 + x8 + x11 + x12 = 1              (flight 204)
x12 + x13 + x14 + x15 = 1                      (flight 211)
x9 + x10 = 1                                   (flight 212)
x3 + x7 + x9 + x11 = 1                         (flight 305)
x1 + x4 + x8 + x10 + x13 = 1                   (flight 308)
x5 + x12 + x14 = 1                             (flight 310)
x11 + x12 = 1                                  (flight 402)
x1 + x5 + x13 + x14 = 1                        (flight 406)
x2 + x3 + x6 + x7 + x9 + x15 = 1               (flight 407)
xj = 0 or 1;  ∀j = 1, . . . , 15

An optimal solution of this problem, at a cost of $9100, is x∗1 = x∗9 = x∗12 = 1
and all other x∗j = 0.
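To make the example concrete, the short C program below checks that this
solution covers every flight exactly once and costs $9100. It is an illustrative
sketch only; the coverage lists and costs are transcribed from Table 1 and the
constraints above.

#include <stdio.h>

int main(void) {
    /* for each flight, the pairings (1-based) that cover it */
    const int cover[12][8] = {
        {1,2,3,4},            /* 101 */
        {6,7,8,9,10,13,15},   /* 109 */
        {1,2,5,6},            /* 203 */
        {3,4,7,8,11,12},      /* 204 */
        {12,13,14,15},        /* 211 */
        {9,10},               /* 212 */
        {3,7,9,11},           /* 305 */
        {1,4,8,10,13},        /* 308 */
        {5,12,14},            /* 310 */
        {11,12},              /* 402 */
        {1,5,13,14},          /* 406 */
        {2,3,6,7,9,15}        /* 407 */
    };
    const int len[12] = {4,7,4,6,4,2,4,5,3,2,4,6};
    const int c[16]   = {0,2900,2700,2600,3000,2600,3150,2550,
                         2500,2600,2050,2400,3600,2550,2650,2350};
    int x[16] = {0};
    x[1] = x[9] = x[12] = 1;                /* claimed optimal solution       */

    int cost = 0, ok = 1;
    for (int j = 1; j <= 15; j++) cost += c[j] * x[j];
    for (int i = 0; i < 12; i++) {
        int covered = 0;
        for (int k = 0; k < len[i]; k++) covered += x[cover[i][k]];
        if (covered != 1) ok = 0;           /* each flight must be covered once */
    }
    printf("feasible = %d, cost = %d\n", ok, cost);  /* feasible = 1, cost = 9100 */
    return 0;
}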
Applying ACO to the American Airlines Example. Each ant starts
with an empty solution and adds columns until a cover is completed. But checking
only whether a column already belongs to the partial solution (j ∈ S^k)
is not good enough. The traditional ACO decision policy, Equation 3, does not
work for SPP because the ants, in this selection process of the next
columns, ignore the information contained in the problem constraints. For example, let
us suppose that at the beginning an ant chooses the pairing or column number
14, so that x14 is instantiated with the value 1. Considering the constraints that
contain x14 then has important consequences:

– Checking the constraint of flight 211, if x14 = 1 then x12 = x13 = x15 = 0.
– Checking the constraint of flight 310, if x14 = 1 then x5 = x12 = 0.
– Checking the constraint of flight 406, if x14 = 1 then x1 = x5 = x13 = 0.
– If x12 = 0, considering the flight 402 constraint then x11 = 1.
– If x11 = 1, considering the flight 204 constraint then x3 = x4 = x7 = x8 = 0;
  and by the flight 305 constraint then x3 = x7 = x9 = 0.
– If x9 = 0, considering the flight 212 constraint then x10 = 1.
– If x10 = 1, by the flight 109 constraint x6 = x7 = x8 = x9 = x13 = x15 = 0;
  and considering the flight 308 constraint x1 = x4 = x8 = x13 = 0.

All the information above, where the only variable left uninstantiated after a
simple propagation of constraints is x2, is ignored by the probabilistic transition
rule of the ants. In the worst case, the iterative steps may assign values to some
variables that make it impossible to obtain complete solutions.
The procedure that we showed above is similar to the Constraint Propagation
technique. Constraint Propagation is an efficient inference mechanism based on
the use of the information in the constraints that can be found under differ-
ent names: Constraint Relaxation, Filtering Algorithms, Narrowing Algorithms,
Constraint Inference, Simplification Algorithms, Label Inference, Local Consis-
tency Enforcing, Rules Iteration, Chaotic Iteration. Constraint Propagation em-
beds any reasoning which consists in explicitly forbidding values or combinations
of values for some variables of a problem because a given subset of its constraints
cannot be satisfied otherwise. The algorithm proceeds as follows: when a value
is assigned to a variable, the algorithm recomputes the possible value sets and
assigned values of all its dependent variables (variables that belong to the same
constraint). This process continues recursively until no more changes can be
done. More specifically, when a variable xm changes its value, the algorithm
evaluates the domain expression of each variable xn dependent on xm . This may
generate a new set of possible values for xn . If this set changes, a constraint is
evaluated, selecting one of the possible values as the new assigned value for xn.
This causes the algorithm to recompute the values for further downstream vari-
ables. In the case of binary variables the constraint propagation works very fast
in strongly constrained problems like SPP. The two basic techniques of Con-
straint Programming are Constraint Propagation and Constraint Distribution.
The problem cannot be solved using Constraint Propagation alone; Constraint
Distribution or Search is required to reduce the search space until Constraint
Propagation is able to determine the solution. Constraint Distribution splits a
problem into complementary cases once Constraint Propagation cannot advance
further. By iterating propagation and distribution, propagation will eventually
determine the solutions of a problem [2].
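The following C sketch illustrates this kind of propagation for the binary
equality constraints of the SPP: fixing a column to 1 forbids every other column
sharing a row with it, and a row whose candidates are all forbidden except one
forces the remaining column to 1, exactly the chain of deductions shown in the
x14 example. The function and its interface are illustrative assumptions, not the
code of the implementation evaluated later.

/* Sketch (assumed): fixpoint propagation of the constraints sum_j a_ij x_j = 1
   over binary variables.  x[j] is -1 while unassigned, otherwise 0 or 1.
   a is the m x n incidence matrix stored row-major.  Returns 0 if a
   contradiction is detected (some row can no longer be covered exactly once). */
int propagate(int nrows, int ncols, const int *a, int x[])
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 0; i < nrows; i++) {
            int ones = 0, free_cnt = 0, last_free = -1;
            for (int j = 0; j < ncols; j++) {
                if (!a[i * ncols + j]) continue;
                if (x[j] == 1) ones++;
                else if (x[j] == -1) { free_cnt++; last_free = j; }
            }
            if (ones > 1 || (ones == 0 && free_cnt == 0))
                return 0;                        /* row over- or under-covered */
            if (ones == 1 && free_cnt > 0) {     /* row satisfied: forbid the rest */
                for (int j = 0; j < ncols; j++)
                    if (a[i * ncols + j] && x[j] == -1) { x[j] = 0; changed = 1; }
            } else if (ones == 0 && free_cnt == 1) {
                x[last_free] = 1;                /* last remaining candidate   */
                changed = 1;
            }
        }
    }
    return 1;
}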

4 ACO with Constraint Programming

Recently, some efforts have been made to integrate Constraint Programming
techniques into ACO algorithms [20,11]. A hybridization of ACO and CP
can be approached from two directions: we can either take ACO or CP as the
base algorithm and try to embed the respective other method into it. One way to
integrate CP into ACO is to let it reduce the possible candidates among the not
yet instantiated variables participating in the same constraints as the current
variable. A different approach would be to embed ACO within CP. The point at
which ACO can interact with CP is during the labelling phase, using ACO to
learn a value ordering that is more likely to produce good solutions.

Procedure ACO+CP_for_SPP
Begin
   InitParameters();
   While (remain iterations) do
      For k := 1 to nants do
         While (solution is not completed) and (TabuList <> J) do
            Choose next Column j with Transition Rule Probability
            For each Row i covered by j do      /* constraints with j */
               feasible(i) := Posting(j);       /* Constraint Propagation */
            EndFor
            If feasible(i) for all i then AddColumnToSolution(j)
            else Backtracking(j);               /* set j uninstantiated */
            AddColumnToTabuList(j);
         EndWhile
      EndFor
      UpdateOptimum();
      UpdatePheromone();
   EndWhile
   Return best_solution_found
End.

Fig. 1. ACO+CP algorithm for SPP

In this work, ACO uses CP in the variable selection (when adding columns to
the partial solution). The CP algorithm used in this paper is Forward Checking with
Backtracking. The algorithm is a combination of an Arc Consistency technique and
Chronological Backtracking [7]. It performs Arc Consistency between pairs of
a not yet instantiated variable and an instantiated variable, i.e., when a value
is assigned to the current variable, any value in the domain of a future variable
which conflicts with this assignment is removed from the domain. The Forward
Checking procedure, taking into account the constraint network topology (i.e.,
which sets of variables are linked by a constraint and which are not), guarantees that
at each step of the search all constraints between already assigned variables and
not yet assigned variables are arc consistent. Then, adding Forward Checking to
ACO for SPP means that columns are chosen only if they do not produce any conflict
with the next column to be chosen. In other words, the Forward Checking search
procedure guarantees that at each step of the search all the constraints between
already assigned variables and not yet assigned variables are arc consistent.
This reduces the search tree and the overall amount of computational work done.
But it should be noted that, in comparison with a pure ACO algorithm, Forward
Checking does additional work each time an assignment is to be added
to the current partial solution. Arc consistency enforcing always increases the
information available on each variable labelling. Figure 1 describes the hybrid
ACO+CP algorithm to solve SPP.
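As a rough illustration of how the Posting and Backtracking steps of Figure 1
might be realized, the hypothetical sketch below reuses the propagate() sketch
of Section 3: a column is tentatively fixed to 1, the constraints are propagated,
and the assignment is undone if a contradiction is detected. This is not the
actual code of the implementation; names and interface are assumptions.

/* Sketch (assumed): tentatively post column j, propagate, and backtrack on
   failure.  Returns 1 if the column can feasibly extend the partial solution. */
int post_column(int nrows, int ncols, const int *a, int x[], int j)
{
    int saved[ncols];                       /* save assignments for backtracking */
    for (int l = 0; l < ncols; l++) saved[l] = x[l];

    x[j] = 1;                               /* tentative assignment              */
    if (propagate(nrows, ncols, a, x))
        return 1;                           /* feasible: keep the column         */

    for (int l = 0; l < ncols; l++) x[l] = saved[l];   /* Backtracking(j)        */
    return 0;                               /* infeasible: column rejected       */
}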

5 Experiments and Results


Table 2 presents the results when adding Forward Checking to the basic ACO
algorithms for solving test instances taken from the OR-Library [4]. It compares
performance with the IP optimum, the Genetic Algorithm of Chu and Beasley [6],
the Genetic Algorithm of Levine [18], and the most recent algorithm by Kotecha et
al. [15]. The first five columns of Table 2 present the problem code, the number
of rows (constraints), the number of columns (decision variables), the best known
cost value for each instance (IP optimal), and the density (percentage of ones in
the constraint matrix) respectively. The next three columns present the results
obtained by the best performing metaheuristics with respect to SPP. The last
four columns present the costs obtained when applying the Ant Algorithms, AS and
ACS, and when combining them with Forward Checking. An entry of "X" in the table
means no feasible solution was found. The algorithms have been run with the
following parameter settings: influence of pheromone (alpha) = 1.0, influence of
heuristic information (beta) = 0.5 and evaporation rate (rho) = 0.4, as suggested in
[16,17,10]. The number of ants has been set to 120 and the maximum number of
iterations to 160, so that the number of generated candidate solutions is limited
to 19,200. For ACS the candidate list size was 500 and q0 = 0.5. Algorithms were
implemented using ANSI C, GCC 3.3.6, under Microsoft Windows XP Professional
version 2002.

Table 2. Experimental Results

Problem Rows Columns Optimum Density Beasley Levine Kotecha AS ACS AS+FC ACS+FC
sppnw06 50 6774 7810 18.17 7810 - - 9200 9788 8160 8038
sppnw08 24 434 35894 22.39 35894 37078 36068 X X 35894 36682
sppnw09 40 3103 67760 16.20 67760 - - 70462 X 70222 69332
sppnw10 24 853 68271 21.18 68271 X 68271 X X X X
sppnw12 27 626 14118 20.00 14118 15110 14474 15406 16060 14466 14252
sppnw15 31 467 67743 19.55 67743 - - 67755 67746 67743 67743
sppnw19 40 2879 10898 21.88 10898 11060 11944 11678 12350 11060 11858
sppnw23 19 711 12534 24.80 12534 12534 12534 14304 14604 13932 12880
sppnw26 23 771 6796 23.77 6796 6796 6804 6976 6956 6880 6880
sppnw32 19 294 14877 24.29 14877 14877 14877 14877 14886 14877 14877
sppnw34 20 899 10488 28.06 10488 10488 10488 13341 11289 10713 10797
sppnw39 25 677 10080 26.55 10080 10080 10080 11670 10758 11322 10545
sppnw41 17 197 11307 22.10 11307 11307 11307 11307 11307 11307 11307

The results show the effectiveness of Constraint Programming for solving SPP:
because the SPP is so strongly constrained, the stochastic behaviour of ACO can be
improved with lookahead techniques in the construction phase, so that almost
only feasible partial solutions are generated. With the original ACO implementations,
solving SPP results in many infeasible labellings of variables, and the ants
cannot complete solutions. With respect to the computational results this is
not surprising, because ACO metaheuristics are general-purpose tools that will
usually be outperformed when customized algorithms for a problem exist.

6 Conclusions and Future Directions


Our main contribution is the study of the combination of Constraint Programming
and Ant Colony Optimization for solving benchmarks of the Airline Crew
Pairing Problem formulated as a Set Partitioning Problem. The main conclusion
from this work is that we can improve ACO with CP. Computational results
also indicated that our hybridization is capable of generating optimal or near
optimal solutions for many problems. The concept of Arc Consistency plays an
essential role in Constraint Programming as a problem simplification operation
and as a tree pruning technique during search through the detection of local
inconsistencies among the uninstantiated variables. We have shown that it is
possible to add Arc Consistency to ACO algorithms, and the computational
results confirm that the performance of ACO can be improved with this type of
hybridisation. However, a complexity analysis should be done in order to evaluate
the overhead introduced by this kind of integration. We strongly believe
that this kind of integration between complete and incomplete techniques should
be studied more deeply. Future versions of the algorithm will study the pheromone
representation and the incorporation of available techniques in order
to reduce the input problem (Pre Processing) and to improve the solutions given
by the ants (Post Processing). The ants' solutions may contain expensive components
which can be eliminated by a fine-tuning heuristic after the solution is built;
we will therefore explore Post Processing procedures, which consist in identifying
and replacing columns of the ACO solution in each iteration by more
effective columns. Besides, the ants' solutions can be improved by other local
search methods like Hill Climbing, Simulated Annealing or Tabu Search.

References
1. D. Alexandrov and Y. Kochetov. Behavior of the ant colony algorithm for the set
covering problem. In Proc. of Symp. Operations Research, pages 255–260. Springer
Verlag, 2000.
2. K. R. Apt. Principles of Constraint Programming. Cambridge University Press,
2003.
3. E. Balas and M. Padberg. Set partitioning: A survey. SIAM Review, 18:710–760,
1976.
4. J. E. Beasley. OR-Library: distributing test problems by electronic mail. Journal of
Operational Research Society, 41(11):1069–1072, 1990.
5. J. E. Beasley and P. C. Chu. A genetic algorithm for the set covering problem.
European Journal of Operational Research, 94(2):392–404, 1996.
6. P. C. Chu and J. E. Beasley. Constraint handling in genetic algorithms: the set
partitioning problem. Journal of Heuristics, 4:323–357, 1998.
7. R. Dechter and D. Frost. Backjump-based backtracking for constraint satisfaction
problems. Artificial Intelligence, 136:147–188, 2002.
8. M. Dorigo, G. D. Caro, and L. M. Gambardella. Ant algorithms for discrete
optimization. Artificial Life, 5:137–172, 1999.
9. M. Dorigo and L. M. Gambardella. Ant colony system: A cooperative learning
approach to the traveling salesman problem. IEEE Transactions on Evolutionary
Computation, 1(1):53–66, 1997.
10. M. Dorigo and T. Stutzle. Ant Colony Optimization. MIT Press, USA, 2004.
11. F. Focacci, F. Laburthe, and A. Lodi. Local search and constraint programming.
In Handbook of metaheuristics. Kluwer, 2002.
12. C. Gagne, M. Gravel, and W. Price. A look-ahead addition to the ant colony
optimization metaheuristic and its application to an industrial scheduling prob-
lem. In J. S. et al., editor, Proceedings of the fourth Metaheuristics International
Conference MIC’01, pages 79–84, July 2001.
13. X. Gandibleux, X. Delorme, and V. T’Kindt. An ant colony algorithm for the set
packing problem. In M. D. et al., editor, ANTS 2004, volume 3172 of LNCS, pages
49–60. SV, 2004.
14. R. Hadji, M. Rahoual, E. Talbi, and V. Bachelet. Ant colonies for the set covering
problem. In M. D. et al., editor, ANTS 2000, pages 63–66, 2000.
15. K. Kotecha, G. Sanghani, and N. Gambhava. Genetic algorithm for airline crew
scheduling problem using cost-based uniform crossover. In Second Asian Applied
Computing Conference, AACC 2004, volume 3285 of Lecture Notes in Artificial
Intelligence, pages 84–91, Kathmandu, Nepal, October 2004. Springer.
16. G. Leguizamón and Z. Michalewicz. A new version of ant system for subset prob-
lems. In Congress on Evolutionary Computation, CEC’99, pages 1459–1464, Pis-
cataway, NJ, USA, 1999. IEEE Press.
17. L. Lessing, I. Dumitrescu, and T. Stutzle. A comparison between aco algorithms
for the set covering problem. In M. D. et al., editor, ANTS 2004, volume 3172 of
LNCS, pages 1–12. SV, 2004.
18. D. Levine. A parallel genetic algorithm for the set partitioning problem. Tech-
nical Report ANL-94/23 Argonne National Laboratory, May 1994. Available at
http://citeseer.ist.psu.edu/levine94parallel.html.
19. V. Maniezzo and M. Milandri. An ant-based framework for very strongly con-
strained problems. In M. D. et al., editor, ANTS 2002, volume 2463 of LNCS,
pages 222–227. SV, 2002.
20. B. Meyer and A. Ernst. Integrating aco and constraint propagation. In M. D.
et al., editor, ANTS 2004, volume 3172 of LNCS, pages 166–177. SV, 2004.
21. R. Michel and M. Middendorf. An island model based ant system with lookahead
for the shortest supersequence problem. In Lecture notes in Computer Science,
Springer Verlag, volume 1498, pages 692–701, 1998.
22. R. L. Rardin. Optimization in Operations Research. Prentice Hall, 1998.
