Design Analysis and Algorithm


UNIT 2

Algorithm Analysis & Design Techniques

Efficiency of Algorithms
Analysis of Recursive Programs
Solving Recurrence Equations
Divide & Conquer Algorithms
Dynamic Programming
Greedy Algorithms
Backtracking

Zope Chaitali Krushna
Roll no: 15

Efficiency of Algorithms
Definition
Fundamental techniques used to design an algorithm
efficiently:
1. Divide-and-Conquer
2. Greedy method
3. Dynamic Programming
4. Backtracking
5. Branch-and-Bound

Advanced MIS

Analysis of Recursive Programs

Definition
Recurrence relations often arise when calculating the
time and space complexity of algorithms.
Many problems can be solved either by writing a
recursive algorithm or by writing a non-recursive
algorithm.

A recursive algorithm is one which makes a
recursive call to itself with smaller inputs.

We often use a recurrence relation to describe the
running time of a recursive algorithm.

A recurrence relation is an equation or
inequality that describes a function in terms of its
value on smaller inputs, or as a function of
preceding (or lower) terms.


METHODS FOR SOLVING RECURRENCE RELATIONS
Definition

We will introduce four methods of solving the
recurrence equation:
1. The Substitution Method
2. The Iteration Method
3. The Recursion-Tree Method
4. The Master Method


In the substitution method, we guess a bound and
then use mathematical induction to prove our
guess correct.
The iteration method converts the recurrence into
a summation and then relies on techniques for
bounding summations to solve the recurrence.
The Master method provides bounds for
recurrences of the form T(n) = aT(n/b) + f(n),
where a >= 1, b > 1, and f(n) is a given function.

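As a small illustration of the iteration method (not in the original slides), the recurrence T(n) = 2T(n/2) + n with the assumed base case T(1) = 1 unrolls to the summation n + n + ... + n over log2(n) levels, giving T(n) = n log2(n) + n for n a power of two. A quick numeric check:

```python
# Evaluate T(n) = 2*T(n/2) + n, T(1) = 1, directly and compare it with the
# closed form n*log2(n) + n predicted by the iteration method.
def T(n):
    # direct evaluation of the recurrence for n a power of two
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n   # n*log2(n) + n
print(T(8))   # -> 32
```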

Divide & conquer technique

Definition
The divide & conquer technique is a top-down
approach to solving a problem.

An algorithm which follows the divide and conquer
technique involves 3 steps:
1. Divide the original problem into a set of
subproblems.
2. Conquer (or solve) every subproblem
individually, recursively.
3. Combine the solutions of these subproblems to
get the solution of the original problem.


Binary search
e.g. 2 4 5 6 7 8 9

Searching for 7 needs 3 comparisons.

Time: O(log n)
Binary search can be used only if the
elements are sorted and stored in an array.

Algorithm binary-search
Input: A sorted sequence of n elements stored in an array
Output: The position of x (to be searched).
Step 1: If only one element remains in the array, solve it
directly.
Step 2: Compare x with the middle element of the array.
Step 2.1: If x = middle element, then output it and stop.
Step 2.2: If x < middle element, then recursively solve the
problem with x and the left half array.
Step 2.3: If x > middle element, then recursively solve the
problem with x and the right half array.

Algorithm BinSearch(a, low, high, x)

// a[]: sorted sequence in nondecreasing order
// low, high: the bounds for searching in a[]
// x: the element to be searched
// If x = a[j] for some j, then return j, else return -1
if (low > high) then return -1      // invalid range
if (low = high) then                // small P: one element left
    if (x == a[low]) then return low
    else return -1
else                                // divide P into two subproblems
    mid = (low + high) / 2
    if (x == a[mid]) then return mid
    else if (x < a[mid]) then
        return BinSearch(a, low, mid-1, x)
    else return BinSearch(a, mid+1, high, x)
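A runnable sketch of the BinSearch pseudocode above, written iteratively with Python's 0-based lists:

```python
# Binary search over a sorted list: compare x with the middle element and
# narrow the search to the left or right half until found or exhausted.
def bin_search(a, x):
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x == a[mid]:
            return mid
        elif x < a[mid]:
            high = mid - 1      # continue in the left half
        else:
            low = mid + 1       # continue in the right half
    return -1                   # x is not in the array

# searching 7 in the sorted array from the slide
print(bin_search([2, 4, 5, 6, 7, 8, 9], 7))   # -> 4
```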

Two-way merge
Merge two sorted sequences into a single one.

[25 37 48 57]  [12 33 86 92]
        merge
[12 25 33 37 48 57 86 92]

Time complexity: O(m+n),
m and n: lengths of the two sorted lists

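The two-way merge scheme can be sketched as follows; it performs at most m+n element moves:

```python
# Two-way merge of two sorted lists in O(m+n) time.
def merge(s1, s2):
    result, i, j = [], 0, 0
    while i < len(s1) and j < len(s2):
        if s1[i] <= s2[j]:
            result.append(s1[i]); i += 1
        else:
            result.append(s2[j]); j += 1
    result.extend(s1[i:])   # at most one of these two
    result.extend(s2[j:])   # tails is non-empty
    return result

print(merge([25, 37, 48, 57], [12, 33, 86, 92]))
# -> [12, 25, 33, 37, 48, 57, 86, 92]
```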

Merge sort
Sort into nondecreasing order:
[25][57][48][37][12][92][86][33]
pass 1
[25 57][37 48][12 92][33 86]
pass 2
[25 37 48 57][12 33 86 92]
pass 3
[12 25 33 37 48 57 86 92]

⌈log2 n⌉ passes are required.

Time complexity: O(n log n)

Algorithm Merge-Sort
Input: A set S of n elements.
Output: The sorted sequence of the inputs in
nondecreasing order.
Step 1: If |S| ≤ 2, solve it directly.
Step 2: Recursively apply this algorithm to solve the left
half part and right half part of S, and the results are
stored in S1 and S2, respectively.
Step 3: Perform the two-way merge scheme on S1 and S2.
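The three steps of Algorithm Merge-Sort can be sketched directly in code (the merge helper is repeated here so the sketch is self-contained):

```python
# Recursive merge sort: split S in half, sort each half recursively,
# then combine the results with a two-way merge.
def merge(s1, s2):
    out, i, j = [], 0, 0
    while i < len(s1) and j < len(s2):
        if s1[i] <= s2[j]:
            out.append(s1[i]); i += 1
        else:
            out.append(s2[j]); j += 1
    return out + s1[i:] + s2[j:]

def merge_sort(s):
    if len(s) <= 1:                 # small case: solve directly
        return s
    mid = len(s) // 2
    s1 = merge_sort(s[:mid])        # left half part of S
    s2 = merge_sort(s[mid:])        # right half part of S
    return merge(s1, s2)            # two-way merge scheme

print(merge_sort([25, 57, 48, 37, 12, 92, 86, 33]))
# -> [12, 25, 33, 37, 48, 57, 86, 92]
```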

Strassen's matrix multiplication


P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 - B22)
S = A22(B21 - B11)
T = (A11 + A12)B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22).
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

Conventional block products:
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22

Time complexity
7 multiplications and 18 additions or subtractions.

T(n) = b,                n ≤ 2
T(n) = 7T(n/2) + an^2,   n > 2

T(n) = an^2 + 7T(n/2)
     = an^2 + 7(a(n/2)^2 + 7T(n/4))
     = an^2 + (7/4)an^2 + 7^2 T(n/4)
     ...
     = an^2 (1 + 7/4 + (7/4)^2 + ... + (7/4)^(k-1)) + 7^k T(1)
     ≤ cn^2 (7/4)^(log2 n) + 7^(log2 n),  c a constant
     = cn^2 · n^(log2 7 - log2 4) + n^(log2 7)
     = O(n^(log2 7)) = O(n^2.81)
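As a sanity check (not from the slides), the seven products P..V really do recombine into the four block entries of C = AB. Using 1x1 blocks (plain numbers) keeps the check free of any matrix library:

```python
# Verify Strassen's identities with scalar (1x1) blocks: each assertion
# compares a recombination of P..V against the conventional block product.
A11, A12, A21, A22 = 1, 2, 3, 4
B11, B12, B21, B22 = 5, 6, 7, 8

P = (A11 + A22) * (B11 + B22)
Q = (A21 + A22) * B11
R = A11 * (B12 - B22)
S = A22 * (B21 - B11)
T = (A11 + A12) * B22
U = (A21 - A11) * (B11 + B12)
V = (A12 - A22) * (B21 + B22)

assert P + S - T + V == A11*B11 + A12*B21   # C11
assert R + T         == A11*B12 + A12*B22   # C12
assert Q + S         == A21*B11 + A22*B21   # C21
assert P + R - Q + U == A21*B12 + A22*B22   # C22
print("all four identities hold")
```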

Maximum finding
Finding the maximum of a set S of n numbers
(another divide-and-conquer example).

Greedy Algorithm

A greedy algorithm always makes the choice that
looks best at the moment.

My everyday examples:
Walking to the corner
Playing a bridge hand

The hope: a locally optimal choice will lead to a
globally optimal solution.

For some problems, it works.

A Generic Greedy Algorithm:

(1) Initialize C to be the set of candidate solutions.
(2) Initialize a set S = the empty set (the set is to be
    the optimal solution we are constructing).
(3) While C ≠ ∅ and S is (still) not a solution do
    (3.1) select x from set C using a greedy strategy
    (3.2) delete x from C
    (3.3) if {x} ∪ S is a feasible solution, then
          S = S ∪ {x} (i.e., add x to set S)
(4) if S is a solution then return S
(5) else return failure

In general, a greedy algorithm is efficient because it makes a
sequence of (local) decisions and never backtracks. The
solution is not always optimal, however.
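An illustrative instance of the generic scheme above is making change with coins {25, 10, 5, 1} (an assumed example, not from the slides; this particular coin system happens to make the greedy choice optimal):

```python
# Generic greedy scheme applied to coin change: candidates are the coin
# denominations, the greedy strategy is "largest coin first", and a choice
# is feasible when it does not overshoot the remaining amount.
def greedy_change(amount, coins=(25, 10, 5, 1)):
    solution = []                             # the set S being built
    for c in sorted(coins, reverse=True):     # greedy selection order
        while amount >= c:                    # feasibility check
            solution.append(c)
            amount -= c
    return solution if amount == 0 else None  # solution or failure

print(greedy_change(63))   # -> [25, 25, 10, 1, 1, 1]
```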

Definition
A Minimum Spanning Tree (MST) is a
subgraph of an undirected graph
such that the subgraph spans
(includes) all nodes, is connected, is
acyclic, and has minimum total edge
weight

Algorithm Characteristics
Both Prim's and Kruskal's algorithms
work with undirected graphs.
Both work with weighted and
unweighted graphs, but are more
interesting when edges are weighted.
Both are greedy algorithms that
produce optimal solutions.

Prim's Algorithm
Similar to Dijkstra's algorithm, except
that dv records edge weights, not
path lengths.

Walk-Through
[Step-by-step figure: Prim's algorithm on an 8-node weighted
graph (nodes A-H), tracked in a table of dv (cheapest known
edge weight) and pv (the node at the other end of that edge).]
Initialize the dv/pv table.
Start with any node, say D.
Repeat until all nodes are selected:
  Update the distances of adjacent, unselected nodes
  (table entries are unchanged when no cheaper edge is found).
  Select the node with minimum distance.
Done. Cost of Minimum Spanning Tree = Σ dv = 21.
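The walk-through above can be sketched in runnable form; the small example graph here is an assumption for illustration, not the slide's figure:

```python
# Prim's algorithm with a priority queue: repeatedly select the unselected
# node reachable by the minimum-weight edge, then update its neighbors.
import heapq

def prim(graph, start):
    # graph: {node: {neighbor: edge_weight}}
    selected, mst_cost, mst_edges = {start}, 0, []
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(selected) < len(graph):
        w, u, v = heapq.heappop(heap)       # minimum-distance node
        if v in selected:
            continue                        # stale entry, skip
        selected.add(v)
        mst_cost += w
        mst_edges.append((u, v, w))
        for x, wx in graph[v].items():      # update adjacent, unselected
            if x not in selected:
                heapq.heappush(heap, (wx, v, x))
    return mst_cost, mst_edges

g = {'A': {'B': 2, 'C': 3}, 'B': {'A': 2, 'C': 1, 'D': 4},
     'C': {'A': 3, 'B': 1, 'D': 5}, 'D': {'B': 4, 'C': 5}}
print(prim(g, 'A'))   # cost 7: edges A-B (2), B-C (1), B-D (4)
```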

Kruskal's Algorithm
Works with edges, rather than nodes.
Two steps:
  Sort edges by increasing edge weight
  Select the first |V| - 1 edges that do not
  generate a cycle

Walk-Through
[Step-by-step figure: Kruskal's algorithm on the same 8-node
weighted graph.]
Consider an undirected, weighted graph.
Sort the edges by increasing edge weight.
Select the first |V| - 1 edges that do not generate a cycle,
skipping any edge whose acceptance would create a cycle
(e.g. edge (E,G) is rejected for this reason).
Done. Total cost = Σ dv = 21; the remaining edges are not
considered.
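The two steps above can be sketched with a union-find structure for cycle detection; the small edge list is an assumed example, not the slide's graph:

```python
# Kruskal's algorithm: sort edges by weight, accept the first |V|-1 edges
# that do not create a cycle (union-find detects cycles).
def kruskal(n_vertices, edges):
    # edges: list of (weight, u, v); vertices are 0..n_vertices-1
    parent = list(range(n_vertices))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, cost = [], 0
    for w, u, v in sorted(edges):      # increasing edge weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # accepting the edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            cost += w
            if len(mst) == n_vertices - 1:
                break                  # |V|-1 edges selected: done
    return cost, mst

edges = [(2, 0, 1), (3, 0, 2), (1, 1, 2), (4, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # cost 7: edges (1,2), (0,1), (1,3)
```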

Knapsack problem

Given a knapsack with maximum capacity W, and
a set S consisting of n items.
Each item i has some weight wi and benefit value
bi (all wi and W are integer values).
Problem: How to pack the knapsack to achieve
the maximum total value of packed items?


0-1 Knapsack problem

The problem, in other words, is to find a subset T of the items with

    max Σ(i∈T) bi   subject to   Σ(i∈T) wi ≤ W

The problem is called a 0-1 problem
because each item must be entirely accepted
or rejected.
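A bottom-up dynamic-programming sketch for the 0-1 problem (DP itself is introduced in the next section); the table name B and the small instance are illustrative, not from the slides:

```python
# 0-1 knapsack by dynamic programming: B[k][w] is the best total value
# achievable using only the first k items within capacity w.
def knapsack(weights, values, W):
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        for w in range(W + 1):
            B[k][w] = B[k - 1][w]          # reject item k entirely
            if weights[k - 1] <= w:        # or accept it entirely
                B[k][w] = max(B[k][w],
                              B[k - 1][w - weights[k - 1]] + values[k - 1])
    return B[n][W]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # -> 7 (items of weight 2 and 3)
```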

Dynamic Programming
Dynamic Programming is an algorithm design technique for
optimization problems: often minimizing or maximizing.
Like divide and conquer, DP solves problems by combining
solutions to subproblems.
Unlike divide and conquer, the subproblems are not
independent: subproblems may share subsubproblems.
However, the solution to one subproblem may not affect the
solutions to other subproblems of the same problem. (More on
this later.)

DP reduces computation by
  Solving subproblems in a bottom-up fashion.
  Storing the solution to a subproblem the first time it is solved.
  Looking up the solution when the subproblem is encountered again.

Key: determine the structure of optimal solutions

Steps in Dynamic Programming

1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute optimal solution values either top-down
with caching or bottom-up in a table.
4. Construct an optimal solution from the computed
values.

Comp 122, Spring 2004

Floyd's Algorithm: All-pairs shortest paths

Problem: In a weighted (di)graph, find the shortest paths
between every pair of vertices.

Same idea: construct the solution through a series of
matrices D(0), ..., D(n), using increasing subsets of the
vertices allowed as intermediate.
[Example figure: a small weighted digraph with its distance
matrix.]

Floyd's Algorithm (matrix generation)

On the k-th iteration, the algorithm determines shortest paths
between every pair of vertices i, j that use only vertices among 1,
..., k as intermediate:

D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}

Initial condition: D(0) is the weight matrix of the graph.
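The matrix-generation rule above can be sketched directly; the 4-vertex weight matrix is an assumed example:

```python
# Floyd's algorithm: start from the weight matrix D(0) and, for each k,
# allow vertex k as an intermediate, applying the min-update rule.
INF = float('inf')

def floyd(D):
    n = len(D)
    D = [row[:] for row in D]          # work on a copy of D(0)
    for k in range(n):                 # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0, 5, INF, 10],
     [INF, 0, 3, INF],
     [INF, INF, 0, 1],
     [INF, INF, INF, 0]]
print(floyd(W))   # D[0][3] becomes 9 via the path 0 -> 1 -> 2 -> 3
```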

Floyd's Algorithm (example)

[Figure: the matrix sequence D(0) through D(4) for a 4-vertex
weighted digraph; each iteration allows one more vertex as an
intermediate, and D(4) holds the all-pairs shortest distances.]

Warshall's Algorithm: Transitive Closure

Computes the transitive closure of a relation.
Alternatively: the existence of all nontrivial paths in a
digraph.
Example of transitive closure:

A = 0 1 0 0        T = 0 1 0 1
    0 0 0 1            0 1 0 1
    1 0 0 0            1 1 0 1
    0 1 0 0            0 1 0 1

Warshall's Algorithm
Constructs the transitive closure T as the last matrix in the sequence of
n-by-n matrices R(0), ..., R(k), ..., R(n) where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first
k vertices allowed as intermediate.
Note that R(0) = A (adjacency matrix), R(n) = T (transitive closure).
[Figure: the matrices R(0) and R(3) for the 4-vertex example.]
Backtracking
Suppose you have to make a series of
decisions, among various choices, where
  You don't have enough information to know
  what to choose
  Each decision leads to a new set of choices
  Some sequence of choices (possibly more
  than one) may be a solution to your
  problem
Backtracking is a methodical way of trying out
various sequences of decisions until you find one that works.

N-Queens Problem
Try to place N queens on an N * N
board such that none of the queens
can attack another queen.
Remember that queens can move
horizontally, vertically, or diagonally
any distance.
Let's consider the 8-queens
example.

The 8-Queens Example

[Board figures showing the search in column 4:]
At (4,4): attack from (0,0)
At (5,4): attack from (2,1)
At (6,4): attack from (4,2)
At (7,4): success
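The example above can be sketched as a backtracking search, placing one queen per row and retreating whenever a column or diagonal attack is detected:

```python
# Backtracking N-queens: cols[r] is the column of the queen in row r;
# a placement is safe when no earlier queen shares its column or diagonal.
def solve_queens(n):
    cols = []

    def safe(row, col):
        for r, c in enumerate(cols):
            if c == col or abs(row - r) == abs(col - c):
                return False       # same column or same diagonal: attack
        return True

    def place(row):
        if row == n:
            return True            # all n queens placed: success
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()         # backtrack and try the next column
        return False

    return cols if place(0) else None

print(solve_queens(8))   # one valid placement, e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```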

Terminology I
A tree is composed of nodes.

There are three kinds of nodes:
  The (one) root node
  Internal nodes
  Leaf nodes

Backtracking can be thought of
as searching a tree for a
particular goal leaf node.

Terminology II
Each non-leaf node in a tree is a parent of
one or more other nodes (its children).
Each node in the tree, other than the root,
has exactly one parent.
Usually, however, we draw our trees
downward, with the root at the top.
[Figure: two small trees with parent and children labeled.]

Sum-of-Subsets problem

Recall the thief and the 0-1 Knapsack problem.


The goal is to maximize the total value of the
stolen items while not making the total weight
exceed W.
If we sort the weights in nondecreasing order
before doing the search, there is an obvious sign
telling us that a node is nonpromising.

Sum-of-Subsets problem

Let total be the total weight of the remaining
weights; a node at the ith level is nonpromising
if
    weight + total < W
(even taking all remaining weights cannot reach W).

Example
Say that our weight values are 5, 3, 2, 4, 1
and W is 8.
We could have
  5 + 3
  5 + 2 + 1
  4 + 3 + 1

We want to find a sequence of values
that satisfies the criterion of adding up
to W.
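A backtracking sketch for this example, using the pruning rule above (the weights are sorted in nondecreasing order first; the second guard, skipping an item that would overshoot W, is a standard companion check):

```python
# Sum-of-subsets by backtracking: at each level either include or exclude
# the next weight, pruning nodes where the remaining total cannot reach W.
def sum_of_subsets(weights, W):
    weights = sorted(weights)          # nondecreasing order
    solutions = []

    def search(i, chosen, weight, remaining):
        if weight == W:
            solutions.append(list(chosen))
            return
        if i == len(weights):
            return
        if weight + remaining < W:     # nonpromising: cannot reach W
            return
        w = weights[i]
        if weight + w <= W:            # include weights[i]
            chosen.append(w)
            search(i + 1, chosen, weight + w, remaining - w)
            chosen.pop()               # backtrack
        search(i + 1, chosen, weight, remaining - w)   # exclude it

    search(0, [], 0, sum(weights))
    return solutions

print(sum_of_subsets([5, 3, 2, 4, 1], 8))
# -> [[1, 2, 5], [1, 3, 4], [3, 5]]
```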

Hamiltonian Circuits Problem
A Hamiltonian circuit (tour) of a graph
is a path that starts at a given
vertex, visits each vertex in the
graph exactly once, and ends at the
starting vertex.

Branch & Bound

The branch-and-bound design strategy is very similar to
backtracking in that a state space tree is used to solve
a problem.
The differences are that the branch-and-bound method
1) does not limit us to any particular way of traversing
the tree, and 2) is used only for optimization problems.
A branch-and-bound algorithm computes a number
(bound) at a node to determine whether the node is
promising.

Branch and Bound

An enhancement of backtracking
Applicable to optimization problems
Uses a lower bound for the value of
the objective function for each node
(partial solution) so as to:
guide the search through state-space
rule out certain branches as
unpromising

The assignment problem


We want to assign n people to n jobs
so that the total cost of the
assignment is as small as possible
(lower bound)

Example: The assignment problem

Select one element in each row of the cost matrix C so that:
  no two selected elements are in the same column; and
  the sum is minimized
For example:
           Job 1  Job 2  Job 3  Job 4
Person a     9      2      7      8
Person b     6      4      3      7
Person c     5      8      1      8
Person d     7      6      9      4
Lower bound: Any solution to this problem will have
a total cost of at least the sum of the smallest element
in each row = 2 + 3 + 1 + 4 = 10.
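The row-minimum lower bound above can be computed in one line: any feasible assignment picks one element per row, so it costs at least the sum of each row's smallest entry.

```python
# Row-minimum lower bound for the example cost matrix.
C = [[9, 2, 7, 8],    # person a
     [6, 4, 3, 7],    # person b
     [5, 8, 1, 8],    # person c
     [7, 6, 9, 4]]    # person d

lb = sum(min(row) for row in C)
print(lb)   # -> 10  (2 + 3 + 1 + 4)
```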

Traveling Salesman Problem

We can apply branch & bound if we come up with
a reasonable lower bound on tour lengths.
Simple lower bound: find the smallest
element in the intercity distance matrix D and
multiply it by the number of cities n.
Alternative bound:
  For each city i, find the sum si of the distances from
  city i to the two nearest cities;
  compute the sum s of these n numbers;
  divide the result by 2;
  and if all the distances are integers, round up the
  result to the nearest integer.

Traveling salesman example:
lb = [(1+3) + (3+6) + (1+2) + (3+4) + (2+3)] / 2 = 14

Example
[Figure: a weighted graph for the branch-and-bound
traveling-salesman example.]

THANK YOU
