
Greedy Algorithms

CLRS 16.1-16.2

1 Greedy Algorithms
We have previously discussed how to speed up optimization problems using the technique of dynamic programming. The problem must have the optimal substructure property: the optimal solution to the problem contains within it optimal solutions to smaller subproblems. Typically the number of different subproblems is polynomial, but the recursive algorithm implementing the recursive formulation of the problem is exponential, because of overlapping calls to the same subproblem. Idea: if the same subproblem is solved several times, use a table to store the result of the subproblem the first time it is computed, and never compute it again. Alternatively, we can think about filling up a table of subproblem solutions from the bottom.

Another technique that uses the same feature of optimal substructure is greediness; it is simpler, often less powerful, and less well-defined. As in the case of dynamic programming, we will introduce greedy algorithms via an example.
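To make the table idea concrete, here is a minimal Python sketch of both flavors. Fibonacci numbers are our own illustrative choice (they are not part of the activity selection material), and the function names are likewise made up.

    from functools import lru_cache

    # Top-down: store the result of each subproblem the first time it is
    # computed, and never compute it again.
    @lru_cache(maxsize=None)
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    # Bottom-up: fill the table of subproblem solutions from the bottom.
    def fib_table(n):
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]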

1.1 Activity Selection

Problem: Given a set A = {A1, A2, ..., An} of n activities with start and finish times (si, fi), 1 ≤ i ≤ n, select a maximal set S of non-overlapping activities. One can think of the problem as scheduling the maximal number of classes (given their start and finish times) in one classroom.

Dynamic programming solution: O(n^3), read the textbook.

Greedy solution:
- Sort the activities by finish time (let A1, A2, ..., An denote the sorted sequence).
- Pick the first activity A1.
- Remove all activities with start time before the finish time of A1.
- Recursively solve the problem on the remaining activities.

Program:
    Sort A by finish time
    S = {A1}
    j = 1
    FOR i = 2 to n DO
        IF si ≥ fj THEN
            S = S ∪ {Ai}
            j = i
        FI
    OD

Example: 11 activities sorted by finish time: (1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11), (8, 12), (2, 13), (12, 14)
[Figure: the 11 activities drawn on a time line from 0 to 14; the greedy algorithm selects (1, 4), (5, 7), (8, 11), and (12, 14).]

The running time is obviously O(n log n), dominated by the sorting step.
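For concreteness, here is a runnable Python version of the pseudocode above; the function name and the list-of-pairs representation of activities are our own choices.

    def select_activities(activities):
        # activities: list of (start, finish) pairs
        ordered = sorted(activities, key=lambda a: a[1])  # sort by finish time
        if not ordered:
            return []
        selected = [ordered[0]]                           # pick first activity
        for start, finish in ordered[1:]:
            if start >= selected[-1][1]:                  # si >= fj
                selected.append((start, finish))
        return selected

    acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9),
            (6, 10), (8, 11), (8, 12), (2, 13), (12, 14)]
    print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11), (12, 14)]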

Is the algorithm correct? The output is a set of non-overlapping activities, but is it the largest possible?

Proof of correctness: Given activities A = {A1, A2, ..., An} ordered by finish time, there is an optimal solution containing A1:
- Suppose S ⊆ A is an optimal solution.
- If A1 ∈ S, we are done.
- If A1 ∉ S: let the first activity in S be Ak. Make a new solution S' = (S \ {Ak}) ∪ {A1} by removing Ak and using A1 instead. This is a valid solution (f1 ≤ fk, since A1 finishes first) of maximal size (|S'| = |S|).

Now let S be an optimal solution for A containing A1. Then S' = S \ {A1} is an optimal solution for A' = {Ai ∈ A : si ≥ f1} (i.e., after choosing A1 the problem reduces to finding an optimal solution for the activities not overlapping with A1):
- Suppose we had a solution S'' to A' with |S''| > |S'| = |S| − 1.
- Then S'' ∪ {A1} would be a solution to A.
- Contradiction, since we would have |S'' ∪ {A1}| > |S|.

Correctness follows by induction on the size of S.

Comparison of greedy technique with dynamic programming


Both techniques use optimal substructure (an optimal solution contains optimal solutions for subproblems within it). In dynamic programming, the solution depends on solutions to subproblems: we compute the optimal solution for each possible choice and then compute the optimal way to combine things together. In a greedy algorithm we choose what looks like the best solution at any given moment and recurse; the choice does not depend on solutions to subproblems. Note the shortsightedness: always go for the seemingly next best thing, optimizing the present without regard for the future, and never change past choices.

Any problem that can be solved by a greedy algorithm can be solved by dynamic programming, but not the other way around. It is often hard to figure out when being greedy works!

How do we know if being greedy works?
- Try dynamic programming first and understand the choices.
- Try to find out if there is a locally best choice, i.e., a choice that looks better than the others (without computing recursive solutions to subproblems).
- Now try to prove that it works correctly.

Greedy correctness proof: it is enough to prove that there exists an optimal solution which contains the greedy choice. That is, prove that, having made the greedy choice, what remains is a subproblem with the property that if we combine the optimal solution to the subproblem with the greedy choice, we get an optimal solution for the original problem. Typically this is proved by contradiction.

Example: 0-1 knapsack problem: given n items, with item i being worth $vi and having weight wi pounds, fill a knapsack of capacity w pounds with maximal value.

Fractional knapsack problem: as the 0-1 knapsack problem, but we can take fractions of items.

The problems appear very similar, but only the fractional knapsack problem can be solved greedily:
- Compute the value per pound vi/wi for each item.
- Sort the items by value per pound.
- Fill the knapsack greedily (take the items in order).

This takes O(n log n) time, and it is easy to show that the solution is optimal (see the sketch below).
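A minimal Python sketch of this greedy strategy; the (value, weight) item representation and the function name are illustrative choices, not from the notes.

    def fractional_knapsack(items, capacity):
        # items: list of (value, weight) pairs; take items (or fractions)
        # in order of decreasing value per pound until the knapsack is full.
        total = 0.0
        for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                    reverse=True):
            if capacity <= 0:
                break
            take = min(weight, capacity)  # whole item, or the fraction that fits
            total += value * take / weight
            capacity -= take
        return total

    print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0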

Example that the 0-1 knapsack problem cannot be solved greedily. Consider three items, listed in value-per-pound order: $60 (10 pounds), $100 (20 pounds), $120 (30 pounds), and a knapsack of size 50.

- Optimal solution for the knapsack of size 50: the $100 and $120 items, for a total of $220.
- Greedy solution: the $60 and $100 items (the $120 item no longer fits), for a total of only $160.

Note: in the fractional knapsack problem we can take the $60 and $100 items plus 2/3 of the $120 item and get the optimal $240 solution.

The 0-1 knapsack problem can be solved in time O(nw) using dynamic programming (homework).
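For reference, here is a standard textbook-style O(nw) dynamic-programming sketch in Python; the one-dimensional table and the names are our own choices, and this is not necessarily the formulation the homework intends.

    def knapsack_01(items, capacity):
        # best[c] = maximal value achievable with total weight at most c
        best = [0] * (capacity + 1)
        for value, weight in items:
            # go through capacities downward so each item is used at most once
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + value)
        return best[capacity]

    print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))  # 220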
