A Hybrid Ant Algorithm for the Airline Crew Pairing Problem
1 Introduction
Crew pairing is one of the most critical processes in airline management operations. Taking a long-term flight schedule as input, the objective of this process is to partition the schedule of airline flights, without breaking constraints (rules and regulations), into individual flight sequences called pairings. A pairing is a sequence of flight legs for an unspecified crew member that starts and finishes at the same city. The problem has attracted many managers and scientists in recent decades. The main challenge is that no general method works well with all kinds of non-linear cost functions and constraints (hard and soft). Furthermore, the problem becomes more complicated as the size of the input grows. The pairing problem can be formulated as a Set Partitioning Problem (SPP), i.e., an equality-constrained Set Covering Problem (SCP); in this formulation the rows are flights and the columns are pairings [3]. In this work, we solve some test instances of Airline Flight Crew Scheduling with Ant Colony Optimization (ACO) algorithms and some hybridizations of ACO with Constraint Programming (CP) techniques.
The authors have been partially supported by the project INRIA-CONICYT VANANAA. The first author has also been partially supported by the project PUCV 209.473/2006. The third author has also been partially supported by the Chilean National Science Fund through the project FONDECYT 1060373.
2 Problem Description
One of the most challenging cases of operational planning, scheduling, and control is found in the airline industry. The efficient management of
operations has become more challenging and complex with the passage of time, and the industry constantly strives to maximize profits in a competitive environment. Although Operations Research and Artificial Intelligence tools have been applied for several decades, the industry's problems still challenge scientists and software engineers: the size of these problems keeps increasing, and the restrictions on them become more and more complicated.
It is assumed that a timetable of flights operated in a schedule period already exists to match the expectations of market demand. Then there are planning and scheduling tasks for aircraft and crews. The first problem, called the Fleet Assignment Problem, takes the timetable as input; its results are the exact departure time for each flight leg and the sequence of flight legs flown by each aircraft. Leaving aside fuel costs, the most important direct operating cost is personnel. Therefore a second problem, the Crew Scheduling Problem, is very important. This problem is often divided into two smaller problems: the Crew Pairing Problem and the Crew Rostering Problem (also called the Crew Assignment Problem). The crew pairing problem takes as input the scheduled flights fixed by the fleet assignment step. Instead of assigning aircraft, the aim now is to allocate crews so as to cover all flight legs while optimizing an objective function. In the crew pairing process, planners do not consider individual crew members; the scheduling is usually done for one period, and the result can be reused for other periods. The flights are grouped into small sets called pairings (or rotations), each of which must start from a home base and end at that same base. The rostering process then performs the remaining task of assigning individual crew members to flight legs.

All published methods attempt to separate the problem of generating pairings from the problem of selecting the best subset of these pairings. The remaining optimization problem is then modelled under the assumption that the set of feasible pairings and their costs are explicitly available, and can be expressed as a Set Partitioning Problem. The SPP model is valid for the daily problem as well as for the weekly and the fully dated problems.
SPP is the NP-complete problem of partitioning a given set into mutually independent subsets while minimizing a cost function defined as the sum of the costs associated with each of the eligible subsets. In the SPP matrix formulation we are given an m × n matrix A = (a_ij) in which every element is either zero or one. Additionally, each column j has a non-negative cost c_j. We say that a column j covers a row i if a_ij = 1. Let J denote the set of columns and x_j a binary variable which is one if column j is chosen and zero otherwise. The SPP can be defined formally as follows:
\[
\text{Minimize } f(x) = \sum_{j=1}^{n} c_j \, x_j \qquad (1)
\]
\[
\text{Subject to } \sum_{j=1}^{n} a_{ij} \, x_j = 1, \quad \forall i = 1, \ldots, m \qquad (2)
\]
In this formulation, each row represents a flight leg that must be scheduled.
The columns represent pairings. Each pairing is a sequence of flights to be covered
by a single crew over a 2 to 3 day period. It must begin and end in the base city
where the crew resides [22].
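To make the formulation concrete, here is a minimal Python sketch (the function and the tiny instance are illustrative, not taken from the paper) that evaluates a candidate set of columns against the SPP model: it checks that every row is covered exactly once and computes the objective value.

import numpy as np

def evaluate_spp(A, c, chosen):
    """Evaluate an SPP candidate solution.

    A      : m x n 0/1 matrix, A[i][j] = 1 if column (pairing) j covers row (flight) i
    c      : list of the n column costs
    chosen : indices of the selected columns (the x_j set to 1)
    Returns (feasible, cost): feasible iff every row is covered exactly once.
    """
    A = np.asarray(A)
    cover = A[:, chosen].sum(axis=1)         # how many chosen columns cover each row
    feasible = bool(np.all(cover == 1))      # the equality constraints (2)
    cost = float(sum(c[j] for j in chosen))  # the objective function (1)
    return feasible, cost

# Tiny illustrative instance (3 flights, 4 candidate pairings):
A = [[1, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]
c = [5, 2, 4, 3]
print(evaluate_spp(A, c, [0, 1]))   # (True, 7.0): columns 0 and 1 partition the rows
print(evaluate_spp(A, c, [0, 2]))   # (False, 9.0): row 0 would be covered twice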
In the AS transition rule, ant k chooses the next column j probabilistically, with a probability proportional to τj[ηj]^β over the columns j not yet in its partial solution S^k; the β parameter controls how important η is in the probabilistic decision [10,17].
Determining the quantity of pheromone to deposit is not a trivial task either. The quantity of pheromone trail laid on columns is based on the idea that the more pheromone trail there is on a particular item, the more profitable that item is [16]. The pheromone deposited on each component is therefore related to its frequency in the ants' solutions. In this work we divided this frequency by the number of ants, which gave better results.
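A minimal sketch of this frequency-based update is given below. The evaporation step with rate rho is an assumption of ours (the excerpt does not detail it); the deposit on each column is its frequency in the current iteration's solutions divided by the number of ants, as described above.

def update_pheromone(tau, solutions, n_ants, rho=0.1):
    """Frequency-based pheromone update (sketch).

    tau       : dict column index -> pheromone value
    solutions : list of solutions, each a set of chosen column indices
    n_ants    : number of ants in the colony
    rho       : assumed evaporation rate (not specified in this excerpt)
    """
    # count how often each column appears in the ants' solutions
    freq = {j: 0 for j in tau}
    for sol in solutions:
        for j in sol:
            freq[j] += 1

    for j in tau:
        tau[j] = (1.0 - rho) * tau[j]        # evaporation (assumed)
        tau[j] += freq[j] / float(n_ants)    # deposit: frequency / number of ants
    return tau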
Heuristic information ηj . In this paper we use a dynamic heuristic in-
formation that depends on the partial solution of an ant. It can be defined as
e
ηj = cjj , where ej is the so called cover value, that is, the number of additional
rows covered when adding column j to the current partial solution, and cj is the
cost of column j. In other words, the heuristic information measures the unit
cost of covering one additional row. An ant ends the solution construction when
all rows are covered.
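The cover value and the heuristic information can be computed directly from the constraint matrix, as in the following illustrative sketch (the names are ours, not the paper's):

def heuristic_info(A, c, j, covered_rows):
    """Dynamic heuristic eta_j = e_j / c_j for column j.

    A            : 0/1 matrix as a list of rows
    c            : list of column costs
    covered_rows : set of row indices already covered by the partial solution
    """
    # e_j: additional rows that column j would cover
    e_j = sum(1 for i, row in enumerate(A)
              if row[j] == 1 and i not in covered_rows)
    if e_j == 0:
        return 0.0            # column adds nothing to the partial solution
    return e_j / float(c[j])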
In this work we use two instances of ACO: the Ant System (AS) and Ant Colony System (ACS) algorithms, the original and the most famous algorithms in the ACO family [10]. ACS improves the search of AS by using a different transition rule in the constructive phase, exploiting the heuristic information more aggressively, using a candidate list for future labelling, and using a different treatment of pheromone. ACS has demonstrated better performance than AS in a wide range of problems [9]. ACS exploits a pseudo-random transition rule in the solution construction; ant k chooses the next column j according to the criterion
\[
j = \operatorname{argmax}_{l \notin S^k} \left\{ \tau_l \, [\eta_l]^{\beta} \right\} \quad \text{if } q \leq q_0 \qquad (4)
\]
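The following sketch illustrates the ACS column-selection step: with probability q0 it applies the greedy choice of Eq. (4), and otherwise it falls back to the AS-style random-proportional rule. The parameter values are placeholders; the paper's settings are not given in this excerpt.

import random

def choose_column(tau, eta, candidates, beta=2.0, q0=0.9):
    """ACS pseudo-random transition rule (sketch).

    tau, eta   : pheromone and heuristic values indexed by column
    candidates : columns not yet in the ant's partial solution
    beta       : weight of the heuristic information
    q0         : assumed probability of choosing greedily
    """
    scores = {j: tau[j] * (eta[j] ** beta) for j in candidates}
    if random.random() <= q0:
        # exploitation: Eq. (4), take the best-scoring column
        return max(scores, key=scores.get)
    # exploration: random-proportional choice, as in Ant System
    total = sum(scores.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j  # fallback for floating-point rounding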
As an illustration of the information contained in the constraints, consider a small SPP instance with 15 columns and the following partitioning constraints (one per flight leg):

x1 + x2 + x3 + x4 = 1                      (flight 101)
x6 + x7 + x8 + x9 + x10 + x13 + x15 = 1    (flight 109)
x1 + x2 + x5 + x6 = 1                      (flight 203)
x3 + x4 + x7 + x8 + x11 + x12 = 1          (flight 204)
x12 + x13 + x14 + x15 = 1                  (flight 211)
x9 + x10 = 1                               (flight 212)
x3 + x7 + x9 + x11 = 1                     (flight 305)
x1 + x4 + x8 + x10 + x13 = 1               (flight 308)
x5 + x12 + x14 = 1                         (flight 310)
x11 + x12 = 1                              (flight 402)
x1 + x5 + x13 + x14 = 1                    (flight 406)
x2 + x3 + x6 + x7 + x9 + x15 = 1           (flight 407)
xj = 0 or 1, for all j = 1, ..., 15
Suppose, for instance, that an ant adds column 14 to its partial solution, so that x14 is instantiated with the value 1. Considering the constraints that contain x14 then has important consequences: a simple propagation of these constraints fixes the values of almost all the remaining variables, leaving only x2 uninstantiated. All of this information is ignored by the probabilistic transition rule of the ants, and in the worst case the iterative steps may assign values to some variables that make it impossible to obtain complete solutions.
The procedure shown above is similar to the Constraint Propagation technique. Constraint Propagation is an efficient inference mechanism, based on the use of the information contained in the constraints, that can be found under different names: Constraint Relaxation, Filtering Algorithms, Narrowing Algorithms, Constraint Inference, Simplification Algorithms, Label Inference, Local Consistency Enforcing, Rules Iteration, Chaotic Iteration. Constraint Propagation embeds any reasoning which consists in explicitly forbidding values, or combinations of values, for some variables of a problem because a given subset of its constraints cannot be satisfied otherwise. The algorithm proceeds as follows: when a value is assigned to a variable, the algorithm recomputes the possible value sets and assigned values of all its dependent variables (the variables that belong to the same constraints). This process continues recursively until no more changes can be made. More specifically, when a variable xm changes its value, the algorithm evaluates the domain expression of each variable xn dependent on xm. This may generate a new set of possible values for xn; if this set changes, a constraint is evaluated, selecting one of the possible values as the new assigned value for xn, which in turn causes the algorithm to recompute the values of further downstream variables. For binary variables, constraint propagation works very fast in strongly constrained problems like the SPP.

The two basic techniques of Constraint Programming are Constraint Propagation and Constraint Distribution. A problem can rarely be solved by Constraint Propagation alone; Constraint Distribution, or search, is required to reduce the search space until Constraint Propagation is able to determine the solution. Constraint Distribution splits a problem into complementary cases once Constraint Propagation cannot advance further. By iterating propagation and distribution, propagation will eventually determine the solutions of a problem [2].
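For the 0/1 equality constraints of the SPP, this propagation can be sketched as follows (an illustration in the spirit of the discussion above, not the authors' implementation): once a column is fixed to 1, every other column sharing one of its rows is forced to 0, and a row whose only remaining free column is its last chance of being covered forces that column to 1.

def propagate(constraints, assignment):
    """Propagate SPP equality constraints sum(x_j) = 1 over 0/1 variables.

    constraints : list of lists, each holding the column indices of one row (flight)
    assignment  : dict column -> 0 or 1 (partial); modified in place
    Returns False if an inconsistency is detected, True otherwise.
    """
    changed = True
    while changed:
        changed = False
        for cols in constraints:
            ones = [j for j in cols if assignment.get(j) == 1]
            free = [j for j in cols if j not in assignment]
            if len(ones) > 1 or (len(ones) == 0 and not free):
                return False                  # the constraint can no longer reach exactly 1
            if len(ones) == 1:                # row already covered:
                for j in free:                #   all other columns must be 0
                    assignment[j] = 0
                    changed = True
            elif len(free) == 1:              # only one way left to cover the row
                assignment[free[0]] = 1
                changed = True
    return True

# Example with the flight constraints above, fixing x14 = 1:
flights = [[1, 2, 3, 4], [6, 7, 8, 9, 10, 13, 15], [1, 2, 5, 6], [3, 4, 7, 8, 11, 12],
           [12, 13, 14, 15], [9, 10], [3, 7, 9, 11], [1, 4, 8, 10, 13],
           [5, 12, 14], [11, 12], [1, 5, 13, 14], [2, 3, 6, 7, 9, 15]]
state = {14: 1}
# Iterating propagation to a fixpoint from x14 = 1 determines every remaining variable here.
print(propagate(flights, state), state)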
Recently, some efforts have been made to integrate Constraint Programming techniques into ACO algorithms [20,11]. A hybridization of ACO and CP can be approached from two directions: we can take either ACO or CP as the base algorithm and try to embed the respective other method into it. One way to integrate CP into ACO is to let CP reduce the set of possible candidates among the not yet instantiated variables that participate in the same constraints as the current variable. A different approach would be to embed ACO within CP; the point at which ACO can interact with CP is during the labelling phase, using ACO to learn a value ordering that is more likely to produce good solutions.
Procedure ACO+CP_for_SPP
Begin
  InitParameters();
  While (remain iterations) do
    For k := 1 to nants do
      While (solution is not completed) and (TabuList <> J) do
        Choose next Column j with Transition Rule Probability;
        For each Row i covered by j do      /* constraints with j */
          feasible(i) := Posting(j);        /* Constraint Propagation */
        EndFor
        If feasible(i) for all i then AddColumnToSolution(j)
        else Backtracking(j);               /* set j uninstantiated */
        AddColumnToTabuList(j);
      EndWhile
    EndFor
    UpdateOptimum();
    UpdatePheromone();
  EndWhile
  Return best_solution_found;
End.

Fig. 1. The hybrid ACO+CP algorithm for the SPP
In this work, ACO uses CP in the variable selection step (when adding columns to the partial solution). The CP algorithm used in this paper is Forward Checking with Backtracking, a combination of an Arc Consistency technique and Chronological Backtracking [7]. It performs arc consistency between pairs of a not yet instantiated variable and an instantiated variable, i.e., when a value is assigned to the current variable, any value in the domain of a future variable which conflicts with this assignment is removed from the domain. The Forward Checking procedure, taking into account the topology of the constraint network (i.e., which sets of variables are linked by a constraint and which are not), guarantees that at each step of the search all constraints between already assigned variables and not yet assigned variables are arc consistent. Adding Forward Checking to ACO for the SPP therefore means that a column is chosen only if it does not produce any conflict with the next column to be chosen. This reduces the search tree and the overall amount of computational work done.
It should be noted, however, that in comparison with a pure ACO algorithm, Forward Checking does additional work each time an assignment is about to be added to the current partial solution. Arc consistency enforcing always increases the information available at each variable labelling. Figure 1 describes the hybrid ACO+CP algorithm for solving the SPP.
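A minimal sketch of the Posting/feasibility test of Figure 1, reusing the propagate routine sketched earlier (illustrative only, not the authors' implementation): a candidate column is accepted only if tentatively fixing it to 1 and propagating the constraints raises no inconsistency; otherwise the tentative assignment is discarded, which corresponds to the Backtracking(j) step.

def forward_check_and_post(constraints, assignment, j):
    """Posting(j) with forward checking (sketch).

    Tentatively sets x_j = 1, propagates the SPP equality constraints and,
    if no inconsistency is found, commits the propagated assignment.
    Returns True if column j was accepted, False otherwise.
    """
    trial = dict(assignment)            # work on a copy so failure leaves no trace
    trial[j] = 1
    if propagate(constraints, trial):   # the propagation sketch given earlier
        assignment.clear()
        assignment.update(trial)        # commit column j and all propagated values
        return True
    return False                        # reject j; the caller performs Backtracking(j)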
Computational results on the SPP test instances:

Problem   Rows  Columns  Optimum  Density (%)  Beasley  Levine  Kotecha     AS    ACS  AS+FC  ACS+FC
sppnw06     50     6774     7810        18.17     7810       -        -   9200   9788   8160    8038
sppnw08     24      434    35894        22.39    35894   37078    36068      X      X  35894   36682
sppnw09     40     3103    67760        16.20    67760       -        -  70462      X  70222   69332
sppnw10     24      853    68271        21.18    68271       X    68271      X      X      X       X
sppnw12     27      626    14118        20.00    14118   15110    14474  15406  16060  14466   14252
sppnw15     31      467    67743        19.55    67743       -        -  67755  67746  67743   67743
sppnw19     40     2879    10898        21.88    10898   11060    11944  11678  12350  11060   11858
sppnw23     19      711    12534        24.80    12534   12534    12534  14304  14604  13932   12880
sppnw26     23      771     6796        23.77     6796    6796     6804   6976   6956   6880    6880
sppnw32     19      294    14877        24.29    14877   14877    14877  14877  14886  14877   14877
sppnw34     20      899    10488        28.06    10488   10488    10488  13341  11289  10713   10797
sppnw39     25      677    10080        26.55    10080   10080    10080  11670  10758  11322   10545
sppnw41     17      197    11307        22.10    11307   11307    11307  11307  11307  11307   11307
On some instances the SPP solving leads to many infeasible labellings of variables, and the ants cannot complete solutions. With respect to the computational results this is not surprising, because ACO metaheuristics are general-purpose tools that will usually be outperformed by customized algorithms for a problem when such algorithms exist.
References
1. D. Alexandrov and Y. Kochetov. Behavior of the ant colony algorithm for the set covering problem. In Proc. of Symp. Operations Research, pages 255–260. Springer-Verlag, 2000.
2. K. R. Apt. Principles of Constraint Programming. Cambridge University Press, 2003.
3. E. Balas and M. Padberg. Set partitioning: A survey. SIAM Review, 18:710–760, 1976.
4. J. E. Beasley. OR-Library: distributing test problems by electronic mail. Journal of the Operational Research Society, 41(11):1069–1072, 1990.
5. J. E. Beasley and P. C. Chu. A genetic algorithm for the set covering problem. European Journal of Operational Research, 94(2):392–404, 1996.
6. P. C. Chu and J. E. Beasley. Constraint handling in genetic algorithms: the set partitioning problem. Journal of Heuristics, 4:323–357, 1998.