Sample Exam Solutions


Question 1. (3 points)
Let N = {1, . . . , n} be a set of customers. A repairman intends to visit all these customers
in a single tour that starts and ends at customer 1. In doing so, the objective is to minimize
the average visiting time of a customer. For every pair of nodes i, j ∈ N , denote by ℓij ≥ 0
the time it takes the repairman to travel from i to j. Travel times are symmetric, i.e.,
ℓij = ℓji for every pair i, j ∈ N , and satisfy the triangle inequality, namely, ℓij ≤ ℓik + ℓkj for
every triple of distinct i, j, k ∈ N . You can refer to Question 2 for an example instance.
(a) For every pair i, j ∈ N , introduce a decision variable xij that equals one if the repairman
travels directly from customer i to j, and that equals 0 otherwise. Additionally, for every i ∈
N , let ti be the time at which the repairman visits customer i.
Correct the following model such that it provides a mixed integer linear programming formulation for the problem.

$$
\begin{aligned}
\max\ & \sum_{i \in N} t_i \\
\text{s.t.}\ & \sum_{i \in N: i \neq j} x_{ij} = \sum_{i \in N: i \neq j} x_{ji} && j \in N \\
& \sum_{i \in N: i \neq 1} x_{1i} = 1 \\
& t_1 = 0 \\
& t_j \geq (t_i + \ell_{ij}) x_{ij} && i, j \in N: i \neq j \\
& x_{ij} \in \{0, 1\} && i, j \in N: i \neq j \\
& t_i \geq 0 && i \in N
\end{aligned}
$$

Solution:
$$
\begin{aligned}
\min\ & \sum_{i \in N} t_i \\
\text{s.t.}\ & \sum_{i \in N: i \neq j} x_{ij} = \sum_{i \in N: i \neq j} x_{ji} = 1 && j \in N \\
& t_1 = 0 \\
& t_j \geq t_i + \ell_{ij} x_{ij} - M(1 - x_{ij}) && i, j \in N: i \neq j \text{ and } j \neq 1 \\
& x_{ij} \in \{0, 1\} && i, j \in N: i \neq j \\
& t_i \geq 0 && i \in N
\end{aligned}
$$

where $M$ is a sufficiently large positive number, for instance $M = \sum_{i,j \in N} \ell_{ij}$. Note
that $t_j \geq (t_i + \ell_{ij}) x_{ij}$ does enforce what we want to achieve, but is not linear since the
right-hand side multiplies two decision variables.
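As a quick verification (added here; it only restates what the corrected constraint implies), consider the two cases of $x_{ij}$ for a pair $i, j \in N$ with $i \neq j$ and $j \neq 1$:

$$
x_{ij} = 1:\quad t_j \geq t_i + \ell_{ij} - M(1 - 1) = t_i + \ell_{ij},
\qquad\qquad
x_{ij} = 0:\quad t_j \geq t_i - M.
$$

The first case enforces the intended visiting-time relation, while the second case is non-binding, since $M = \sum_{i,j \in N} \ell_{ij}$ is at least as large as any visiting time $t_i$ along a tour.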


(b) Suppose that for every customer i ∈ N , there is a given target time t̂i ≥ 0 at which you
aim to visit the respective customer. Specify an additional constraint to enforce for every
customer i ∈ N that the actual visiting time ti does not deviate from the target time t̂i by
more than u ≥ 0 time units.

Solution:
$$\hat{t}_i - u \leq t_i \leq \hat{t}_i + u \qquad i \in N \setminus \{1\}$$

(c) Adjust the model such that, instead of minimizing the average visiting time, the ob-
jective minimizes the total deviation from the target times t̂i introduced in the previous
sub-question. Clearly define any additional decision variables that you may introduce.

Solution:
Introduce for every i ∈ N the decision variable δi that equals the deviation from the
target time t̂i . The objective function then becomes:
$$\min \sum_{i \in N} \delta_i$$

and we introduce the following sets of constraints:

$$
\begin{aligned}
\delta_i &\geq t_i - \hat{t}_i && i \in N \\
\delta_i &\geq \hat{t}_i - t_i && i \in N \\
\delta_i &\geq 0 && i \in N
\end{aligned}
$$


Question 2. (4 points)
Consider the following instance for the problem described in Question 1, where n = 6. For
the current question, you can ignore the modifications to the original problem that were
added in Questions 1(b)-(c).

Travel times ℓij (symmetric; only the upper triangle is shown):

        1   2   3   4   5   6
   1    0   1   2   2   3   4
   2        0   1   1   2   3
   3            0   2   1   2
   4                0   1   2
   5                    0   1
   6                        0

One feasible tour would then be (1, 6, 2, 5, 4, 3, 1), with objective value

$$
\begin{aligned}
t_1 &= 0 \\
t_6 &= 0 + 4 = 4 \\
t_2 &= 4 + 3 = 7 \\
t_5 &= 7 + 2 = 9 \\
t_4 &= 9 + 1 = 10 \\
t_3 &= 10 + 2 = 12
\end{aligned}
\qquad\qquad \sum_{i=1}^{6} t_i = 42
$$
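To reproduce such objective-value computations mechanically, here is a minimal Python sketch (the travel-time dictionary and the function names are illustrative, not part of the exam):

```python
# Travel times l_ij of the example instance, customers 1..6 (symmetric completion of the table).
L = {
    1: {1: 0, 2: 1, 3: 2, 4: 2, 5: 3, 6: 4},
    2: {1: 1, 2: 0, 3: 1, 4: 1, 5: 2, 6: 3},
    3: {1: 2, 2: 1, 3: 0, 4: 2, 5: 1, 6: 2},
    4: {1: 2, 2: 1, 3: 2, 4: 0, 5: 1, 6: 2},
    5: {1: 3, 2: 2, 3: 1, 4: 1, 5: 0, 6: 1},
    6: {1: 4, 2: 3, 3: 2, 4: 2, 5: 1, 6: 0},
}

def visiting_times(tour):
    """Cumulative visiting times t_i along a tour of the form (1, ..., 1)."""
    times, t = {tour[0]: 0}, 0
    for prev, cur in zip(tour, tour[1:-1]):  # the final return to customer 1 adds no visiting time
        t += L[prev][cur]
        times[cur] = t
    return times

def objective(tour):
    """Sum of the visiting times of all customers."""
    return sum(visiting_times(tour).values())

print(objective((1, 6, 2, 5, 4, 3, 1)))  # 42, matching the computation above
```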

(a) Propose a move for the problem of Question 1 that can be used in a local search algorithm
and that always produces a feasible solution. Describe the move in general terms (i.e., for
an arbitrary instance) and make sure to explain why this solution is always feasible.

Solution:
One possible move would be the pairwise interchange of two customers i, j ∈ N \ {1}
with i ̸= j in the tour. Since this results in a tour that starts and ends with customer 1,
this move always produces a feasible solution.


(b) Apply the move to the solution (1, 6, 2, 5, 4, 3, 1) in the example instance above.

Solution:
Applied to the example above, a possible move could be to interchange customers 6
and 4, which leads to the tour (1, 4, 2, 5, 6, 3, 1) with objective value
$$
\begin{aligned}
t_1 &= 0 \\
t_4 &= 0 + 2 = 2 \\
t_2 &= 2 + 1 = 3 \\
t_5 &= 3 + 2 = 5 \\
t_6 &= 5 + 1 = 6 \\
t_3 &= 6 + 2 = 8
\end{aligned}
\qquad\qquad \sum_{i=1}^{6} t_i = 24
$$
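The move itself is easy to express in code. A short Python sketch, continuing the illustrative snippet from the introduction of this question (it reuses the `objective` function defined there; the name `swap` is an assumption):

```python
def swap(tour, i, j):
    """Pairwise interchange of customers i and j (both different from 1) in a tour."""
    new_tour = list(tour)
    a, b = new_tour.index(i), new_tour.index(j)
    new_tour[a], new_tour[b] = new_tour[b], new_tour[a]
    return tuple(new_tour)

neighbor = swap((1, 6, 2, 5, 4, 3, 1), 6, 4)
print(neighbor)             # (1, 4, 2, 5, 6, 3, 1)
print(objective(neighbor))  # 24, matching the computation above
```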

(c) Starting from the feasible solution (1, 6, 2, 5, 4, 3, 1) described above, illustrate how a
simulated annealing algorithm that uses your move would proceed for this problem instance
for three iterations. Use a starting temperature of T = 5 and decrease the temperature by
one unit in every iteration. In each iteration, you are free to choose any neighbor that is
reachable by a single move from the current solution at the start of that iteration, so you
do not need to use random numbers for these decisions. For all other random decisions, use
the following sequence of random numbers:

0.12, 0.34, 0.77, 0.43, 0.92, 0.26.

You can choose additional random numbers freely in case you need more. The table below
computes $e^{-\delta/T}$ for all values of $T \in \{5, 4, 3\}$, indicated by the rows, and for all values of
$\delta \in \{1, \dots, 12\}$, indicated by the columns. For $T = 4$ and $\delta = 3$, for instance, we have
$e^{-\delta/T} = e^{-3/4} = 0.47$. Observe that $e^{-\delta/T} < 0.10$ for all values of $T \in \{5, 4, 3\}$ and $\delta \geq 12$.

T \ δ     1     2     3     4     5     6     7     8     9    10    11    12
  5    0.82  0.67  0.55  0.45  0.37  0.30  0.25  0.20  0.17  0.14  0.11  0.09
  4    0.78  0.61  0.47  0.37  0.29  0.22  0.17  0.14  0.11  0.08  0.06  0.05
  3    0.72  0.51  0.37  0.26  0.19  0.14  0.10  0.07  0.05  0.04  0.03  0.02


Solution:

it  T  best solution        current solution     neighbor             accept with probability  random number & decision
 1  5  (1,6,2,5,4,3,1)  42  (1,6,2,5,4,3,1)  42  (1,4,2,5,6,3,1)  24  1 (improvement)          -     accept
 2  4  (1,4,2,5,6,3,1)  24  (1,4,2,5,6,3,1)  24  (1,4,2,6,5,3,1)  26  e^{-2/4} = 0.61          0.12  accept
 3  3  (1,4,2,5,6,3,1)  24  (1,4,2,6,5,3,1)  26  (1,4,2,6,3,5,1)  28  e^{-2/3} = 0.51          0.34  accept

In iteration 1, we swap customers 6 and 4 as in the previous sub-question. In iteration 2,
we swap customers 5 and 6, leading to tour (1, 4, 2, 6, 5, 3, 1) with objective value

$$
\begin{aligned}
t_1 &= 0 \\
t_4 &= 0 + 2 = 2 \\
t_2 &= 2 + 1 = 3 \\
t_6 &= 3 + 3 = 6 \\
t_5 &= 6 + 1 = 7 \\
t_3 &= 7 + 1 = 8
\end{aligned}
\qquad\qquad \sum_{i=1}^{6} t_i = 26
$$
In iteration 3, we swap customers 3 and 5, leading to tour (1, 4, 2, 6, 3, 5, 1) with objective
value

$$
\begin{aligned}
t_1 &= 0 \\
t_4 &= 0 + 2 = 2 \\
t_2 &= 2 + 1 = 3 \\
t_6 &= 3 + 3 = 6 \\
t_3 &= 6 + 2 = 8 \\
t_5 &= 8 + 1 = 9
\end{aligned}
\qquad\qquad \sum_{i=1}^{6} t_i = 28
$$
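These three iterations can also be traced with a small Python sketch. This is a minimal sketch under assumed conventions (it reuses the illustrative `objective` and `swap` helpers from the earlier snippets and fixes the swaps chosen above; only deteriorating moves consume a random number):

```python
import math

def simulated_annealing(start, chosen_swaps, random_numbers, T=5):
    current = best = start
    rng = iter(random_numbers)
    for i, j in chosen_swaps:                        # one freely chosen swap per iteration
        neighbor = swap(current, i, j)
        delta = objective(neighbor) - objective(current)
        # Improvements are always accepted; deteriorations with probability e^{-delta/T}.
        if delta <= 0 or next(rng) <= math.exp(-delta / T):
            current = neighbor
        if objective(current) < objective(best):
            best = current
        T -= 1                                       # decrease the temperature by one unit
    return best, current

best, current = simulated_annealing(
    (1, 6, 2, 5, 4, 3, 1), [(6, 4), (5, 6), (3, 5)], [0.12, 0.34, 0.77, 0.43, 0.92, 0.26]
)
print(best, objective(best))        # (1, 4, 2, 5, 6, 3, 1) 24
print(current, objective(current))  # (1, 4, 2, 6, 3, 5, 1) 28
```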


Question 3. (3 points)
Suppose that you are working for a company that receives a set of orders every day and
can choose which ones to fulfill. Today there are n available orders and if you decide to
select/fulfill order i ∈ {1, . . . , n}, then this requires a processing time of ti hours and brings
a profit of pi euros. Each selected order needs to be processed by exactly one employee. The
company has m employees and each employee can work up to $T \geq \max_{i \in \{1,\dots,n\}} t_i$ hours. The
aim is to decide on the set of orders to fulfill and to assign them to employees in such a way
that the working hours of employees are respected and the total profit is maximized. This
problem can be modeled as
$$
\begin{aligned}
\max\ & \sum_{i=1}^{n} \sum_{j=1}^{m} p_i x_{ij} \\
\text{s.t.}\ & \sum_{j=1}^{m} x_{ij} \leq 1 && i = 1, \dots, n && (1) \\
& \sum_{i=1}^{n} t_i x_{ij} \leq T && j = 1, \dots, m \\
& x_{ij} \in \{0, 1\} && i = 1, \dots, n, \ j = 1, \dots, m
\end{aligned}
$$

where xij is 1 if order i is assigned to employee j and 0 otherwise.


(a) Relax constraints (1) in a Lagrangian manner. Write down your relaxed problem.

Solution:
$$
\begin{aligned}
\max\ & \sum_{i=1}^{n} \sum_{j=1}^{m} p_i x_{ij} + \sum_{i=1}^{n} \lambda_i \Big( 1 - \sum_{j=1}^{m} x_{ij} \Big) \\
\text{s.t.}\ & \sum_{i=1}^{n} t_i x_{ij} \leq T && j = 1, \dots, m \\
& x_{ij} \in \{0, 1\} && i = 1, \dots, n, \ j = 1, \dots, m
\end{aligned}
$$


(b) For which values of the Lagrange multipliers is this a relaxation?

Solution:
λi ≥ 0, i = 1, . . . , n

(c) Do you recognize the relaxed problem? Does it decompose? Is it easy to solve? Explain.

Solution:
The relaxed problem decomposes into a separate 0/1 knapsack problem for every j ∈
{1, . . . , m}. In particular, for each j ∈ {1, . . . , m}, if we define the problem
$$
\begin{aligned}
z_j(\lambda) = \max\ & \sum_{i=1}^{n} (p_i - \lambda_i) x_{ij} \\
\text{s.t.}\ & \sum_{i=1}^{n} t_i x_{ij} \leq T \\
& x_{ij} \in \{0, 1\} && i = 1, \dots, n
\end{aligned}
$$

then we have that

$$
z(\lambda) = \sum_{i=1}^{n} \lambda_i + \sum_{j=1}^{m} z_j(\lambda).
$$

Hence, we can solve the relaxed problem by solving a separate 0/1 knapsack problem for
every j ∈ {1, . . . , m}.
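To make the decomposition concrete, here is a minimal Python sketch of how the relaxed problem could be evaluated for given multipliers: one 0/1 knapsack per employee, solved by textbook dynamic programming. The function names, the assumption of integer processing times, and the final check with λ = 0 are illustrative additions, not part of the exam solution:

```python
def knapsack(values, weights, capacity):
    """Maximum value of a 0/1 knapsack with non-negative integer weights."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        if v <= 0:
            continue  # an item with non-positive reduced profit is never worth selecting
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

def relaxed_value(p, t, T, m, lam):
    """z(lambda) = sum_i lambda_i + sum_j z_j(lambda); here all m subproblems are identical."""
    z_j = knapsack([pi - li for pi, li in zip(p, lam)], t, T)
    return sum(lam) + m * z_j

# Illustrative check with the data of Question 4(c) and lambda = 0:
p, t = [12, 18, 8, 11, 4, 13], [4, 6, 2, 4, 2, 4]
print(relaxed_value(p, t, T=8, m=2, lam=[0] * 6))  # 52, an upper bound on the optimal total profit
```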


Question 4. (4 points)
Below you can find a formulation for the problem of Question 3 that is suited for column
generation.
$$
\begin{aligned}
\max\ & \sum_{S \in \Omega} p_S \lambda_S \\
\text{s.t.}\ & \sum_{S \in \Omega: i \in S} \lambda_S \leq 1 && i = 1, \dots, n && (2) \\
& \sum_{S \in \Omega} \lambda_S \leq m && && (3) \\
& \lambda_S \in \{0, 1\} && S \in \Omega
\end{aligned}
$$

(a) What is the interpretation of the decision variables λS in this formulation?

Solution:
λS takes value 1 if the orders in set S ⊆ {1, . . . , n} are assigned to one employee and 0
otherwise.

(b) How should the parameters Ω and pS be defined for the formulation to be correct?

Solution:
Ω is the set of all subsets S ⊆ {1, . . . , n} such that $\sum_{i \in S} t_i \leq T$, and
$p_S = \sum_{i \in S} p_i$ is the total profit of the orders in set S ∈ Ω.


(c) Associate with every constraint i ∈ {1, . . . , n} of type (2) in the column generation
formulation a dual price θi ≥ 0, and with constraint (3) a dual price β ≥ 0. The reduced cost
of a primal decision variable λS then equals $\sum_{i \in S} \theta_i + \beta - p_S$.
Suppose that you apply column generation on an instance with n = 6, m = 2, T = 8, and
profits pi and processing times ti as given in the table below.

 i    1    2    3    4    5    6
 pi  12   18    8   11    4   13
 ti   4    6    2    4    2    4
 θi   0   12    8    0    0   13
In a given iteration of the column generation procedure, you obtain dual prices θi as given
by the final row in the above table and β = 13. Is there any column to add to the restricted
master LP? Which one and why?

Solution:
We want to add a column λS to the restricted master if and only if it has a negative
reduced cost $\sum_{i \in S} \theta_i + \beta - p_S$. In the instance above, an example of a column to add
is λS for S = {1, 4}, since it has a negative reduced cost equal to

$$\theta_1 + \theta_4 + \beta - p_1 - p_4 = 0 + 0 + 13 - 12 - 11 = -10$$

and {1, 4} ∈ Ω since t1 + t4 = 4 + 4 ≤ 8 = T.

The answer above suffices to get full score on this question. Some additional remarks:
• The set S = {1, 4} is just one example; other example columns with negative reduced
cost are {1, 5} and {4, 5}.
• For a better understanding, it might be useful to interpret the reduced cost as follows.
If an employee fulfills a set of orders S ∈ Ω, then there remains one employee less
available in the remaining optimization, and the orders i ∈ S cannot be fulfilled by any
other employee. The dual price β reflects the decrease in the objective function when
we have one employee less at our disposal. The dual price θi reflects the decrease in
the objective function when we can no longer fulfill order i. Hence, the reduced cost
reflects that it is beneficial to use decision variable λS if the profit $p_S = \sum_{i \in S} p_i$ from
fulfilling the orders i ∈ S outweighs the decrease in objective function $\sum_{i \in S} \theta_i + \beta$.
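A small Python sketch of this pricing step for the instance above, enumerating all feasible columns by brute force (fine for n = 6; in general one would solve a knapsack-type pricing problem instead). The variable and function names are illustrative:

```python
from itertools import combinations

p     = {1: 12, 2: 18, 3: 8, 4: 11, 5: 4, 6: 13}   # profits
t     = {1: 4, 2: 6, 3: 2, 4: 4, 5: 2, 6: 4}       # processing times
theta = {1: 0, 2: 12, 3: 8, 4: 0, 5: 0, 6: 13}     # dual prices of constraints (2)
beta, T = 13, 8                                     # dual price of constraint (3), time limit

def reduced_cost(S):
    """Reduced cost of column lambda_S: sum_{i in S} theta_i + beta - p_S."""
    return sum(theta[i] for i in S) + beta - sum(p[i] for i in S)

# Enumerate Omega (all feasible order sets) and keep the columns with negative reduced cost.
orders = list(p)
candidates = [
    S
    for r in range(1, len(orders) + 1)
    for S in combinations(orders, r)
    if sum(t[i] for i in S) <= T and reduced_cost(S) < 0
]
for S in candidates:
    print(S, reduced_cost(S))  # includes (1, 4) with reduced cost -10, and also (1, 5) and (4, 5)
```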


(d) Describe for both the original formulation from Question 3 and the column generation
formulation from the current question how to incorporate the additional requirement that
some orders should not be fulfilled by the same employee. More specifically, for every pair of
orders i, k ∈ {1, . . . , n} there is a binary indicator aik that equals one if and only if orders i
and k are allowed to be fulfilled by the same employee.

Solution:
For the original formulation, introduce for every pair of incompatible orders the additional
constraints:

$$x_{ij} + x_{kj} \leq 1 \qquad j = 1, \dots, m \text{ and } i, k = 1, \dots, n \text{ with } a_{ik} = 0.$$

For the column generation formulation, redefine Ω as the set of all subsets S such
that both $\sum_{i \in S} t_i \leq T$ and $a_{ik} = 1$ for all distinct i, k ∈ S.


Question 5. (4 points)
We are given an undirected graph G = (V, E) with vertices V and edges E. A matching is
defined as a subset of the edges M ⊆ E such that every vertex v ∈ V is incident to at most
one edge in M . We say that a matching M is maximal when it cannot be enlarged. That is,
there exists no edge e ∈ E \ M such that e is not adjacent to any edge in M . We say that a
matching M is maximum when no matching of larger cardinality exists. For example, in the
graph below,

• the set of edges {{1, 2}, {2, 3}} is not a matching because vertex 2 is incident to both
edges;
• the set of edges {{1, 2}, {3, 5}} is a matching, but not maximal because it can be
enlarged by adding edge {4, 6};
• the set of edges {{1, 4}, {3, 5}} is a maximal matching, but not a maximum matching;
• the set of edges {{1, 2}, {3, 5}, {4, 6}} is a maximum matching.

[Figure: example graph on vertices 1 to 6 (not fully reproduced here); the edges referenced above are {1, 2}, {2, 3}, {1, 4}, {3, 5}, and {4, 6}.]

We are interested in the problem of finding a maximum matching. Consider the following
greedy heuristic for this problem:

1. Initialize M ← ∅.
2. While there exists an edge e ∈ E \ M that is not adjacent to any edge in M , do:
• Arbitrarily pick such an edge e ∈ E \ M that is not adjacent to any edge in M and
include it in the matching; i.e., let M ← M ∪ {e}.
3. Return the matching M .
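A minimal Python sketch of this greedy heuristic, assuming the graph is given as a list of edges (the function name and the fixed scan order are illustrative; step 2 allows picking any admissible edge):

```python
def greedy_matching(edges):
    """Greedily build a maximal matching: add any edge sharing no vertex with the matching so far."""
    matching, matched_vertices = [], set()
    for u, v in edges:  # an arbitrary scan order; the heuristic allows any order
        if u not in matched_vertices and v not in matched_vertices:
            matching.append((u, v))
            matched_vertices.update((u, v))
    return matching

# On a path graph 1-2-3-4, scanning edge {2, 3} first yields a maximal but not maximum matching.
print(greedy_matching([(2, 3), (1, 2), (3, 4)]))  # [(2, 3)]
print(greedy_matching([(1, 2), (2, 3), (3, 4)]))  # [(1, 2), (3, 4)]
```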


(a) Argue that the greedy heuristic always outputs a maximal matching.

Solution:
By definition, a matching M is maximal when there exists no edge e ∈ E \ M that
is not adjacent to any edge in M . Hence, the while loop of the greedy algorithm only
terminates if M is maximal. On the other hand, the while loop terminates after at most
|E| iterations because there are only |E| edges and exactly one edge is included in every
iteration. This shows that the greedy algorithm always outputs a maximal matching.

(b) Provide an example instance where the greedy algorithm could output a matching that
is not maximum.

Solution:
For the instance described by the graph below, the greedy algorithm could output the
matching M = {{2, 3}} by selecting the edge {2, 3} at the first iteration of the while-loop.
The unique maximum matching for this graph, however, is M ⋆ = {{1, 2}, {3, 4}}.

[Figure: path graph with vertices 1, 2, 3, 4 and edges {1, 2}, {2, 3}, {3, 4}.]


(c) Show that the greedy heuristic has a performance guarantee of factor two. That is,
given an arbitrary undirected graph G = (V, E), let M be the matching obtained by the
greedy algorithm and let M ⋆ be a maximum matching. You need to show that |M ⋆ | ≤ 2|M |.
(Hint: first denote by V (M ) the set of vertices incident to the edges of M and observe that
|V (M )| = 2|M |. Next, argue that every edge e ∈ M ⋆ is incident to at least one vertex in
V (M ), and that every vertex in V (M ) is incident to at most one edge in M ⋆ .)

Solution:
Observe that every edge e = {u, v} ∈ M is incident to exactly two vertices (i.e., its
endpoints u and v). Moreover, no vertex v ∈ V (M ) is incident to more than one edge
in M since M is a matching. We therefore have that

|V (M )| = 2|M |. (4)

Next, observe that, since M is maximal, every edge e ∈ M ⋆ is incident to at least one
vertex in V (M ), because otherwise M could be enlarged by including e. Moreover,
since M ⋆ is a matching, every vertex v ∈ V (M ) is incident to at most one edge in M ⋆ .
Therefore,
|M ⋆ | ≤ |V (M )|. (5)
Combining Equations (4) and (5), we obtain that 2|M | = |V (M )| ≥ |M ⋆ |, which is what
we needed to prove.


Question 6. (2 points)
True or false. Please provide a brief explanation to each answer. There is no subtraction of
points for a wrong answer, but just answering ‘true’ or ‘false’ without an explanation will
yield no points.
(a) The vehicle routing problem is a relaxation of the traveling salesperson problem.

Solution:
False, the vehicle routing problem generalizes the traveling salesperson problem. For
instance, due to the capacity constraints, not every feasible TSP tour is also feasible for
the VRP.

(b) The assignment problem is a relaxation of the traveling salesperson problem.

Solution:
True, relaxing the subtour elimination constraints in the DFJ formulation for TSP results
in a formulation for the assignment problem.

(c) One advantage of a Lagrangian relaxation based heuristic is that it yields both a primal
and dual bound.

Solution:
True, the relaxation provides a dual bound (i.e., lower bound for a minimization problem
and an upper bound for a maximization problem), whereas the heuristic provides a
feasible solution and, as such, a primal bound.

(d) In simulated annealing it is crucial to increase the temperature after each iteration in
which a move improves the currently best found solution.

Solution:
False, in simulated annealing the temperature is decreased after each iteration.
