
CS 161 Lecture 13 – Greedy Algorithms Jessica Su (some parts copied from CLRS)

1 Non greedy algorithms (which we should have covered earlier)

1.1 Floyd-Warshall algorithm

This algorithm solves the all-pairs shortest paths problem: we want to find the shortest distance between each pair of vertices in a graph, all at the same time. Floyd-Warshall runs in O(n^3) time.

1.1.1 Idea

Label the nodes 1, . . . , n. Consider the intermediate vertices of shortest paths (i.e. the
vertices on a path from u to v that are not equal to u or v).
Define a subproblem (u, v, k), where we consider the shortest path from u to v that only
uses nodes 1 through k as intermediate vertices. (Then for all pairs of vertices (u, v), we are
ultimately interested in the subproblems (u, v, n), which tell us the shortest distance from u
to v that may include any of the vertices as intermediate vertices.)
Either this shortest path p uses node k, or it does not.
If it doesn’t use node k, then all intermediate vertices of path p are in the set {1, 2, . . . , k −1}.
Therefore, the shortest path for subproblem (u, v, k) is the same as the shortest path for
subproblem (u, v, k − 1).
If it does use node k, then the shortest path (that uses the first k vertices as intermediate
vertices) can be decomposed into the shortest path from u to k (that uses the first k − 1
vertices as intermediate vertices) plus the shortest path from k to v (that uses the first k − 1
vertices as intermediate vertices). (If this wasn’t true, then we could replace either of the
subpaths with a shorter path, deriving a contradiction.)


1.1.2 Recursive formulation


Let d_{ij}^{(k)} be the solution to the subproblem (i, j, k), i.e. the weight of a shortest path from vertex i to vertex j where all the intermediate vertices are in the set {1, 2, . . . , k}. Then

    d_{ij}^{(k)} = w_{ij}                                                  if k = 0
    d_{ij}^{(k)} = min(d_{ij}^{(k−1)}, d_{ik}^{(k−1)} + d_{kj}^{(k−1)})    if k ≥ 1

The base case, k = 0, is just the paths with no intermediate vertices (recall that the vertices are numbered starting from 1, and not starting from 0). So d_{ij}^{(0)} is just w_{ij}, the weight of the edge from i to j. (If there is no edge from i to j, we say the weight of the edge is infinity.)
The output of the algorithm is just the matrix of distances δ(i, j) = d_{ij}^{(n)}.
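As a concrete illustration, here is a minimal Python sketch of this recurrence (the function name and adjacency-matrix convention are assumptions for illustration; vertices are 0-indexed here, while the notes number them from 1):

INF = float('inf')

def floyd_warshall(w):
    # w is an n x n matrix with w[i][j] = INF if there is no edge
    # from i to j, and w[i][i] = 0 for all i.
    n = len(w)
    d = [row[:] for row in w]   # base case: d_{ij}^{(0)} = w_{ij}
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Either the shortest path avoids k, or it goes through k.
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

Note that we can reuse a single matrix d rather than storing all n + 1 matrices d^{(k)}: within iteration k, the values d[i][k] and d[k][j] are not changed by the update (since d[k][k] = 0 when there are no negative cycles).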

1.1.3 Shortest paths

To find the actual shortest paths in the graph (as opposed to just the shortest distances), we keep track of the predecessor matrices Π^{(k)}, where π_{ij}^{(k)} is the predecessor of vertex j on a shortest path from vertex i with all intermediate vertices in the set {1, 2, . . . , k}.
To compute π_{ij}^{(k)}, observe that there are once again two cases: the case where k is on the shortest path from i to j that only includes intermediate vertices in the set {1, . . . , k}, and the case where k is not on the shortest path.
If k is not on the shortest path, then π_{ij}^{(k)} = π_{ij}^{(k−1)}, since the predecessor of j should be the same as the predecessor for the case where we only use the first k − 1 vertices as intermediate vertices.
If k is on the shortest path, then π_{ij}^{(k)} = π_{kj}^{(k−1)}, since the predecessor of j from i is the same as the predecessor of j that we chose on a shortest path from k with all intermediate vertices in the set {1, 2, . . . , k − 1}.
So we have

    π_{ij}^{(k)} = π_{ij}^{(k−1)}    if d_{ij}^{(k−1)} ≤ d_{ik}^{(k−1)} + d_{kj}^{(k−1)}
    π_{ij}^{(k)} = π_{kj}^{(k−1)}    otherwise

for k ≥ 1. And

    π_{ij}^{(0)} = NIL    if i = j or w_{ij} = ∞
    π_{ij}^{(0)} = i      if i ≠ j and w_{ij} < ∞
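Once the final matrix Π^{(n)} has been computed, we can print the shortest path from i to j by walking backwards through the predecessors. A minimal sketch (assuming pi is Π^{(n)} stored as a Python list of lists, with None playing the role of NIL; this follows CLRS's PRINT-ALL-PAIRS-SHORTEST-PATH):

def print_path(pi, i, j):
    # Recursively print the vertices on a shortest path from i to j.
    if i == j:
        print(i)
    elif pi[i][j] is None:
        print("no path from", i, "to", j)
    else:
        print_path(pi, i, pi[i][j])
        print(j)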

2 Greedy algorithms
When solving an optimization problem using dynamic programming, you make a “choice”
at each step, and you find the optimal solution given the choice you’ve made. The idea is to
try all possible choices and choose the one that leads to the best optimal solution.
For example, in the rod cutting problem, you are trying to find the maximum amount of money r_i you can get by cutting a rod of length i. So first you want to cut off a piece of length j (selling it for p_j dollars), and then you want to find the maximum amount of money you can get by cutting the remaining rod of length i − j. In the dynamic programming solution, we try this for all possible values of j, and take the max over all of the optimal solutions to find the overall optimal solution, i.e. r_i = max_{1≤j≤i}(p_j + r_{i−j}).
To find r_n, we just compute r_0, r_1, r_2, etc. in sequence until we get to r_n.
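As a quick illustration, a bottom-up Python sketch of this recurrence (names are my own, not from the lecture):

def rod_cutting(p, n):
    # p[j] is the price of a piece of length j (p[0] is unused).
    # r[i] holds the best revenue r_i for a rod of length i.
    r = [0] * (n + 1)
    for i in range(1, n + 1):
        # Try every possible length j for the first piece we cut off.
        r[i] = max(p[j] + r[i - j] for j in range(1, i + 1))
    return r[n]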
With greedy algorithms, instead of looking at all the choices and deciding between them, we
focus on one choice: the greedy choice. The greedy choice is the choice that looks best at
any given moment. Once we have made it, we are free to solve a smaller subproblem, which
we can also solve with a greedy choice, etc.
Importantly, there are many cases where greedy algorithms don’t work, even when you feel
like they should. So even when a particular greedy algorithm seems intuitively obvious,
it is important to rigorously prove correctness (more so than in other parts of this class).
This often means arguing that (on each step) if an optimal solution doesn't use the greedy
choice, we can replace it with an equally good solution that does use the greedy choice.

2.1 The activity selection problem

In this problem, you have a list of proposed activities 1, . . . , n. Activity i starts at time s_i and finishes at time f_i. You want to do as many activities as possible while avoiding time conflicts.
Formally, we want to find a maximum-size subset of mutually compatible activities. Activities i and j are compatible if the intervals [s_i, f_i) and [s_j, f_j) do not overlap.


We assume that the activities are sorted in monotonically increasing order of finish time, i.e. f_1 ≤ f_2 ≤ · · · ≤ f_n.
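The activity table (reconstructed here to match the walkthrough below; it is the standard CLRS activity-selection example):

    i  :  1  2  3  4  5  6  7  8  9  10 11
    s_i:  1  3  0  5  3  5  6  8  8  2  12
    f_i:  4  5  6  7  9  9  10 11 12 14 16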

Example: In the table above, the set {3, 9, 11} consists of mutually compatible activities. It is
not a maximum subset, since the subset {1, 4, 8, 11} is larger. In fact, this subset is a largest
subset of mutually compatible activities.
Note that there may be multiple optimal solutions – another one is {2, 4, 9, 11}. However,
our algorithm only needs to find one of them.

2.1.1 Idea

Claim: We can find an optimal solution by always selecting the activity with the earliest finishing time f_1, then recursing on the activities whose start times are ≥ f_1.
(The idea is to always select the activity that leaves the most resources available for other
activities.)
Example: In the table, we start by choosing activity 1, which finishes at time 4. Now we
recurse on activities that start after time 4 (i.e. activities 4, 6, 7, 8, 9, and 11). Then we
choose activity 4, since that is the activity with the earliest finishing time in our remaining
set. Now we recurse on activities 8, 9, and 11. We choose activity 8, which finishes at time
11, and then the only activity left over is activity 11. This produces the set {1, 4, 8, 11}.

2.1.2 Algorithm

We consider the intermediate subproblem S_k, which is the set of activities that start after activity k finishes. Our goal is to prove that we can find some optimal solution A_k to this subproblem that includes the greedy choice x as one of the activities. Then at each step of the algorithm, it will be safe to make the greedy choice, and then recursively consider
1. The activities that finish before x starts (note: this is the empty set, since x is guaranteed to be the activity in S_k with the smallest finishing time)
2. The activities that start after x finishes, which means we have to solve the subproblem S_x. (When we are solving S_x we can once again use the greedy choice, which in this context, means the activity in S_x with the earliest finishing time.)
Then, we may end up writing the pseudocode as follows:
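A Python sketch of this recursion, following CLRS's RECURSIVE-ACTIVITY-SELECTOR (assuming 1-indexed arrays s and f sorted by finish time, with a sentinel f[0] = 0):

def recursive_activity_selector(s, f, k, n):
    # Find the earliest-finishing activity that starts after
    # activity k finishes (i.e. the greedy choice in S_k).
    m = k + 1
    while m <= n and s[m] < f[k]:
        m = m + 1
    if m <= n:
        # Take activity m greedily, then solve the subproblem S_m.
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

The top-level call is recursive_activity_selector(s, f, 0, n).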


2.1.3 Correctness

Theorem: Consider any nonempty subproblem S_k, and let x be an activity in S_k with the earliest finish time. Then x is included in some maximum-size subset of mutually compatible activities in S_k.
Proof: Suppose we have an “optimal subset” of activities A_k (note that there may be more than one optimal subset). Let j be the activity in A_k with the earliest finishing time. If j = x, then we are done, since we have shown that x is in some maximum-size subset of mutually compatible activities of S_k.
If j ≠ x, then consider the set of activities A′_k = A_k − {j} ∪ {x}, where we swap out the first activity in the “optimal subset” with the first activity in the entire subproblem. The activities in A′_k are disjoint, which follows because the activities in A_k are disjoint, j is the first activity in A_k to finish, and f_x ≤ f_j. Since |A′_k| = |A_k|, we conclude that A′_k is a maximum-size subset of mutually compatible activities of S_k, and it includes activity x.

2.1.4 Iterative pseudocode

In practice, it is easier to write the code iteratively, as follows:
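A Python sketch, mirroring CLRS's GREEDY-ACTIVITY-SELECTOR (same assumptions as before: 1-indexed s and f sorted by finish time):

def greedy_activity_selector(s, f, n):
    A = [1]    # activity 1 has the earliest finish time overall
    k = 1      # index of the most recently selected activity
    for m in range(2, n + 1):
        if s[m] >= f[k]:    # activity m is compatible with everything in A
            A.append(m)
            k = m
    return A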

This algorithm runs in time O(n).


2.2 Knapsack problem

Now we will consider another application of dynamic programming, the knapsack problem.
This problem is interesting in part because the greedy strategy doesn’t work on one variant
of the problem, but if we change the problem slightly, the greedy strategy does work.

2.2.1 0-1 knapsack problem

Suppose we have n items {1, . . . , n} and we want to decide which ones to take to the pawn shop. Each item i has a value v_i (which represents how much we could sell it for at the pawn shop), and a weight w_i > 0. Our knapsack can only hold a total weight of W, which constrains which items we can bring.
Question: How can we choose the best set of items to bring?
Formally: Choose a set of items S ⊆ {1, . . . , n} that maximizes Σ_{i∈S} v_i, subject to the constraint that Σ_{i∈S} w_i ≤ W.
Greedy solution: Loop through all the items; on each iteration, choose the item with the greatest “bang for your buck”, i.e. the greatest v_i/w_i.
Example where this doesn’t work: Suppose your knapsack has size 50. Item 1 weighs
10 pounds and is worth 60 dollars. Item 2 weighs 20 pounds and is worth 100 dollars. Item
3 weighs 30 pounds and is worth 120 dollars.
The greedy strategy would choose item 1 and then item 2, leaving no room for item 3. This
strategy is worth 160 dollars total. However, the optimal solution chooses items 2 and 3,
which is worth 220 dollars.
Dynamic programming solution: On the nth step of the algorithm, we decide whether
to include the nth item in the knapsack. If we include the nth item, then the problem
reduces to stuffing a subset of the first n − 1 items into a knapsack of size W − wn . If we
don’t include the nth item, the problem reduces to stuffing a subset of the first n − 1 items
into a knapsack of size W .
Let A[i, j] be the maximum amount of money you can get from stuffing the first i items into a knapsack of size j. (After populating the table, the answer will be stored in A[n, W].) Then

    A[i, j] = max(A[i − 1, j], A[i − 1, j − w_i] + v_i)

where we take only the first option when w_i > j (i.e. when the ith item does not fit).

And we can write the code as follows:


for j = 0 to W:
    A[0, j] = 0

for i = 1 to n:
    for j = 0 to W:
        if w[i] > j:
            A[i, j] = A[i - 1, j]
        else:
            A[i, j] = max(A[i - 1, j], A[i - 1, j - w[i]] + v[i])
Runtime: O(nW)
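The same table-filling translated into runnable Python (the 0-indexed list convention for v and w is my own):

def knapsack_01(v, w, W):
    # v[i], w[i]: value and weight of item i + 1 in the notes' numbering.
    n = len(v)
    # A[i][j] = best value using the first i items with capacity j.
    A = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if w[i - 1] > j:
                A[i][j] = A[i - 1][j]
            else:
                A[i][j] = max(A[i - 1][j],
                              A[i - 1][j - w[i - 1]] + v[i - 1])
    return A[n][W]

On the example above, knapsack_01([60, 100, 120], [10, 20, 30], 50) returns 220.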

2.2.2 Fractional knapsack problem

In the fractional knapsack problem, you can take fractions of items, instead of either deciding to bring the item or not bring the item. In this case, the greedy strategy does apply. First we compute the value per pound v_i/w_i for each item. Then we take as much as possible of the item with the greatest value per pound. Once the supply of that item is exhausted, we move on to the item with the second-greatest value per pound, etc.
Correctness: Assume the items are sorted in order of decreasing v_i/w_i. Denote a solution to this problem by s = (s_1, s_2, . . . , s_n), where s_i is the total amount of the ith item that we choose to carry. The greedy algorithm works by assigning s_1 = min(w_1, W), and then recursively solving the subproblem with the last n − 1 items and a knapsack of capacity W − s_1.
To show this strategy gives an optimal solution, consider any optimal solution s_1, s_2, . . . , s_n with s_1 < min(w_1, W). (Note that s_1 cannot be greater than min(w_1, W) because the total amount of item 1 is only w_1, so we are really just comparing to the greedy case, where s_1 = min(w_1, W).) Let i be the largest number such that s_i > 0. By decreasing s_i to max(0, s_i − (min(w_1, W) − s_1)) and increasing s_1 by the same amount, we get a solution that is at least as good, since item 1 has the greatest value per pound. Repeating this exchange until s_1 = min(w_1, W) shows that some optimal solution makes the greedy choice. Therefore, the greedy solution is an optimal solution.
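A greedy Python sketch (names are my own; items are given as (value, weight) pairs):

def fractional_knapsack(items, W):
    # Sort by value per pound, greatest first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if W <= 0:
            break
        take = min(weight, W)    # take as much of this item as fits
        total += value * (take / weight)
        W -= take
    return total

On the example from the 0-1 section, fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) takes all of items 1 and 2 and two thirds of item 3, for 60 + 100 + 80 = 240 dollars.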
