Dynamic Programming


Dynamic Programming

Approaches for solving DP Problems

Greedy
- Typically not an optimal solution (for DP-type problems)
- Build the solution step by step
- Use a criterion for picking
- Commit to a choice and do not look back

DP
- Optimal solution
- Write a math function, sol, that captures the dependency of the solution to the current problem on solutions to smaller problems
- Can be implemented in any of the following ways: iterative, memoized, recursive

Brute Force
- Optimal solution
- Produce all possible combinations, [check if valid], and keep the best
- Time: exponential
- Space: depends on the implementation
- It may be hard to generate all possible combinations

Iterative (bottom-up) - BEST
- Optimal solution
- sol is an array (1D or 2D). Size: N+1
- Fill in sol from 0 to N
- Time: polynomial (or pseudo-polynomial for some problems)
- Space: polynomial (or pseudo-polynomial)
- To recover the choices that gave the optimal answer, must backtrace => must keep a picked array (1D or 2D)

Memoized
- Optimal solution
- Combines recursion and usage of the sol array
- sol is an array (1D or 2D)
- Fill in sol from 0 to N
- Time: same as the iterative version (typically)
- Space: same as the iterative version (typically) + space for the frame stack (frame stack depth is typically smaller than the size of the sol array)

Recursive
- Optimal solution
- Time: exponential (typically) => DO NOT USE
- Space: depends on the implementation (code), e.g. store all combinations, or generate and evaluate on the fly and keep the best seen so far
- Easy to code given the math function

Improve space usage
- Improves the iterative solution
- Saves space
- If used, cannot recover the choices (gives the optimal value, but not the choices)

DP can solve:
- some types of counting problems (e.g. stair climbing)
- some types of optimization problems (e.g. Knapsack)
- some types of recursively defined problems (e.g. Fibonacci)

SOME DP solutions have pseudo-polynomial time.
Dynamic Programming (DP)
• Dynamic programming (DP) applies when a problem has both of these
properties:
1. Optimal substructure: “optimal solutions to a problem incorporate optimal solutions to
related subproblems, which we may solve independently”.
2. Overlapping subproblems: “a recursive algorithm revisits the same problem
repeatedly”.
• Dynamic Programming is a paradigm of algorithm design in which an optimization problem is solved by combining solutions to sub-problems and appealing to the "principle of optimality".
• Dynamic programming is typically used to:
• Solve optimization problems that have the above properties.
• Solve counting problems –e.g. Stair Climbing or Matrix Traversal.
• Speed up existing recursive implementations of problems that have overlapping
subproblems (property 2) – e.g. Fibonacci.
• Compare dynamic programming with divide and conquer.
Components of Dynamic programming
1. Stages: The problem can be divided into several sub-problems, which are called stages. A stage is a small portion of a given problem. For example, in the shortest path problem, the stages were defined by the structure of the graph.
2. States: Each stage has several states associated with it. The states for the shortest path problem were the nodes reached.
3. Decision: At each stage, there can be multiple choices, out of which the best decision should be taken. The decision taken at every stage should be optimal; this is called a stage decision.
4. Optimal policy: It is a rule which determines the decision at each stage; a policy is called an optimal policy if it is globally optimal. This is known as the Bellman principle of optimality.
5. Given the current state, the optimal choices for each of the remaining states do not depend on the previous states or decisions. In the shortest path problem, it was not necessary to know how we got to a node, only that we did.
6. There exists a recursive relationship that identifies the optimal decision for stage j, given that stage j+1 has already been solved.
7. The final stage must be solved by itself.
Applications of dynamic programming
1. 0/1 knapsack problem
2. All pairs shortest path problem
3. Reliability design problem
4. Longest common subsequence (LCS)
5. Flight control and robotics control
6. Time-sharing: it schedules jobs to maximize CPU usage
Bottom-Up vs. Top Down
• There are two versions of dynamic programming:
• Bottom-up.
• Top-down (or memoization).
• Bottom-up:
• Iterative, solves problems in sequence, from smaller to bigger.
• Top-down:
• Recursive, start from the larger problem, solve smaller problems as needed.
• For any problem that we solve, store the solution, so we never have to compute the same solution twice.
• This approach is also called memoization.
Top-Down Dynamic Programming (Memoization)
• Maintain an array/table where solutions to problems can be saved.
• To solve a problem P:
• See if the solution has already been stored in the array.
• If yes, return the solution.
• Else:
• Issue recursive calls to solve whatever smaller problems we need to solve.
• Using those solutions obtain the solution to problem P.
• Store the solution in the solutions array.
• Return the solution.

Iterative or Bottom-Up
Dynamic Programming
• Main type of solution for DP problems
• We can define the problem size and solve problems from size 0 going
up to the size we need.
• Iterative – because it uses a loop
• Bottom-up because you solve problems from the bottom (the smallest
problem size) up to the original problem size.

Steps for iterative (bottom up) solution
1. Identify trivial problems
   1. typically where the size is 0
2. Look at the last step/choice in an optimal solution:
   1. Assuming an optimal solution, what is the last action in completing it?
   2. Is there more than one option for that last action?
   3. If you consider each action, what is the smaller problem that you would combine with that last action?
      1. Assume that you have the optimal answer to that smaller problem.
   4. Generate all these solutions.
   5. Compute the value (gain or cost) for each of these solutions.
   6. Keep the optimal one (max or min, based on the problem).
3. Make a 1D or 2D array and start filling in answers from smallest to largest problems (see the stair-climbing sketch below).

Other types of solutions:
1. Brute force solution
2. Recursive solution (most likely exponential and inefficient)
3. Memoized solution
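A minimal bottom-up sketch of the stair-climbing counting problem mentioned earlier, illustrating the steps above (trivial problem, last choice, fill the array from small to large). The assumption that each move climbs 1 or 2 steps is illustrative; the slides do not fix the allowed step sizes.

def count_ways(n):
    # ways[i] = number of distinct ways to reach stair i
    ways = [0] * (n + 1)
    ways[0] = 1                        # trivial problem: one way to stand at the bottom
    for i in range(1, n + 1):
        ways[i] = ways[i - 1]          # last action was a 1-step move
        if i >= 2:
            ways[i] += ways[i - 2]     # last action was a 2-step move
    return ways[n]

print(count_ways(5))   # 8

Because this is a counting problem, the values coming from the possible "last actions" are added rather than compared with max/min.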
The two common dynamic programming approaches are:
• Memoization: known as "top-down" dynamic programming; the problem is usually solved in the direction from the main problem to the base cases.
• Tabulation: known as "bottom-up" dynamic programming; the problem is usually solved in the direction from the base cases to the main problem.
Note: The base case does not always mean smaller input. It depends upon
the implementation of the algorithm.
Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
Steps to memoize a recursive solution:
Any recursive solution to a problem can be memoized using these three steps:
1. Create a dp[n+1] array initialized to -1.
2. Whenever we want to find the answer for a particular value (say n), we first check whether the answer is already calculated using the dp array (i.e., dp[n] != -1). If yes, simply return the value from the dp array.
3. If not, then we are finding the answer for the given value for the first time: use the recursive relation as usual, but before returning from the function, set dp[n] to the solution we get. (A sketch follows below.)
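A minimal sketch of the three steps above, using the Fibonacci sequence from the previous slide as the recursive problem (function and variable names are illustrative):

def fib_memo(n, dp=None):
    if dp is None:
        dp = [-1] * (n + 1)        # Step 1: dp[n+1] array initialized to -1
    if n <= 1:                     # base cases of the recursion
        return n
    if dp[n] != -1:                # Step 2: answer already calculated?
        return dp[n]
    dp[n] = fib_memo(n - 1, dp) + fib_memo(n - 2, dp)   # Step 3: compute, store, return
    return dp[n]

print(fib_memo(8))   # 21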
Tabulation method (Fibonacci table filled bottom-up):

index: 1  2  3  4  5
value: 0  1  1  2  3
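The same problem solved by tabulation; a minimal sketch that fills the table from the base cases up to n (indexed from 0 here, unlike the 1-based table above):

def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)        # dp[i] holds Fibonacci(i)
    dp[1] = 1                 # base cases: dp[0] = 0, dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print([fib_tab(i) for i in range(9)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21]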
Shortest Path Problem
• Shortest path problem is a problem of finding the
shortest path(s) between vertices of a given graph.
• Shortest path between two vertices is a path that has
the least cost as compared to all other existing paths.
• Shortest path algorithms are a family of algorithms
used for solving the shortest path problem
• Shortest path algorithms have a wide range of applications, such as:
• Google Maps
• Road Networks
• Logistics Research
Shortest Path Problem
Single-Pair Shortest Path Problem
• It is a shortest path problem where the shortest path between a given pair of vertices is computed.
• The A* Search Algorithm is a famous algorithm used for solving the single-pair shortest path problem.

Single-Source Shortest Path Problem
• It is a shortest path problem where the shortest path from a given source vertex to all other remaining vertices is computed.
• Dijkstra's Algorithm and the Bellman Ford Algorithm are famous algorithms used for solving the single-source shortest path problem.

Single-Destination Shortest Path Problem
• It is a shortest path problem where the shortest path from all the vertices to a single destination vertex is computed.
• By reversing the direction of each edge in the graph, this problem reduces to the single-source shortest path problem.
• Dijkstra's Algorithm is a famous algorithm adapted for solving the single-destination shortest path problem.

All Pairs Shortest Path Problem
• It is a shortest path problem where the shortest path between every pair of vertices is computed.
• The Floyd Warshall Algorithm is a famous algorithm used for solving the all-pairs shortest path problem.
Single-Source Shortest Path Problem
• It is a shortest path problem where the shortest path from a given source vertex to all other remaining vertices is computed.
• Dijkstra's Algorithm and the Bellman Ford Algorithm are famous algorithms used for solving the single-source shortest path problem.
Applications
- Maps (MapQuest, Google Maps)
- Routing Systems
Dijkstra's algorithm
Dijkstra's algorithm is a solution to the single-source shortest path problem in graph theory.
It works on both directed and undirected graphs. However, all edges must have nonnegative weights.
Input: a weighted graph G = (V, E) and a source vertex v ∈ V, such that all edge weights are nonnegative.
Output: the lengths of the shortest paths (or the shortest paths themselves) from the given source vertex v ∈ V to all other vertices.
Dijkstra's algorithm
Relaxation:
If (d[u] + c[u,v] < d[v])
    d[v] = d[u] + c[u,v];

Selected vertex  Visited set     d[2]  d[3]  d[4]  d[5]  d[6]  d[7]
1                {1}              10    ∞     ∞     ∞    30     ∞
2                {1,2}            10   30     ∞     ∞    30     ∞
3                {1,2,3}          10   30    45    35    30     ∞
6                {1,2,3,6}        10   30    45    35    30    65
5                {1,2,3,6,5}      10   30    45    35    30    42
4                {1,2,3,6,5,4}    10   30    45    35    30    42
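A minimal sketch of Dijkstra's algorithm using the relaxation rule above. The example graph is an assumption: its edges are chosen to be consistent with the trace table, but the original edge list is not shown on the slide.

import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, cost), ...]} with nonnegative edge costs
    d = {u: float('inf') for u in graph}
    d[source] = 0
    pq = [(0, source)]                 # min-heap of (distance, vertex)
    visited = set()
    while pq:
        du, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)                 # u is now "selected"
        for v, c in graph[u]:
            if du + c < d[v]:          # relaxation: d[u] + c[u,v] < d[v]
                d[v] = du + c
                heapq.heappush(pq, (d[v], v))
    return d

# Hypothetical edge list consistent with the trace above (source vertex 1)
g = {1: [(2, 10), (6, 30)], 2: [(3, 20)], 3: [(4, 15), (5, 5)],
     4: [], 5: [(7, 7)], 6: [(7, 35)], 7: []}
print(dijkstra(g, 1))   # {1: 0, 2: 10, 3: 30, 4: 45, 5: 35, 6: 30, 7: 42}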
Solve the following
Bellman Ford Algorithm
• Based on the bottom-up approach of dynamic programming.
• Calculates the shortest path from a single source vertex to all nodes in a weighted graph.
• Unlike Dijkstra's algorithm, the Bellman Ford algorithm works with graphs that have negative edge weights.
• It guarantees a correct, optimal shortest path between vertices.
• Based on the "Principle of Relaxation"; Bellman Ford also addresses the problem of negative cycles in graphs.
• Negative weights must be handled carefully, otherwise they will lead to incorrect results.
• The main idea behind the algorithm: it begins with a single source and estimates the distance to each node. Initially the distances are unknown and assumed to be infinity; later on, the algorithm relaxes those paths by finding shorter paths.
• That is why the algorithm is said to be based on the "Principle of Relaxation".
• To ensure that the obtained result is optimal, repeat the steps for all the vertices. If any new shorter path is found for a vertex, the previous result was not yet optimal.
Bellman Ford Algorithm - Working
Step-by-step working of the algorithm:
Step-1: Initialize the distance value of every vertex to infinity, and set the distance value of the source node itself to 0.
Step-2: Visit each edge and relax the path if the previous estimate was not accurate. To relax the path along edge u-v:
If (d(u) + c(u, v) < d(v))
    d(v) = d(u) + c(u, v)
That is, if the distance value of u plus the cost of moving from u to v is less than the current distance value of v, then the distance value of v is updated to the distance value of u plus the cost of the edge u-v.
Step-3: If the new distance value is less than the previous one, update the distance value of that vertex in each iteration over the edges. The distance value of every node is the total distance from the starting vertex to that particular node.
Step-4: Repeat the above steps for multiple iterations (n - 1 in total) to ensure the obtained result is optimal.
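A minimal sketch of Steps 1-4 above, plus the negative-cycle check mentioned earlier; the edge list shown is the one from the worked example that follows (other names are illustrative).

def bellman_ford(vertices, edges, source):
    d = {u: float('inf') for u in vertices}
    pred = {u: None for u in vertices}
    d[source] = 0                          # Step-1
    for _ in range(len(vertices) - 1):     # Step-4: repeat n - 1 times
        for u, v, c in edges:              # Step-2/3: relax every edge
            if d[u] + c < d[v]:
                d[v] = d[u] + c
                pred[v] = u
    for u, v, c in edges:                  # extra pass: detect a negative cycle
        if d[u] + c < d[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return d, pred

edges = [('A','B',6), ('A','C',4), ('A','D',5), ('B','E',-1), ('C','B',-2),
         ('C','E',3), ('D','C',-2), ('D','F',-1), ('E','F',3)]
dist, pred = bellman_ford('ABCDEF', edges, 'A')
print(dist)   # {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}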
If (d[u] + c[u,v] < d[v])
    d[v] = d[u] + c[u,v];

Bellman Ford Algorithm
No. of iterations = n - 1, where n = no. of vertices; here 6 - 1 = 5 iterations.

Step 1. List all the edges:
Edge  Cost
A-B    6
A-C    4
A-D    5
B-E   -1
C-B   -2
C-E    3
D-C   -2
D-F   -1
E-F    3

Initial
Vertex       A  B  C  D  E  F
Distance     0  ∞  ∞  ∞  ∞  ∞
Predecessor  -  -  -  -  -  -

Iteration 1
Vertex       A  B  C  D  E  F
Distance     0  6  4  5  ∞  ∞
Predecessor  -  A  A  A  -  -
If (d[u] + c[u,v] < d[v])
    d[v] = d[u] + c[u,v];

Bellman Ford Algorithm
No. of iterations = n - 1 = 5; edges as listed in Step 1.

Iteration 2
Vertex       A  B  C  D  E  F
Distance     0  2  4  5  5  4
Predecessor  -  C  A  A  B  D

Iteration 3
Vertex       A  B  C  D  E  F
Distance     0  1  3  5  0  3
Predecessor  -  C  D  A  B  E
If (d[u] + c[u,v] < d[v])
    d[v] = d[u] + c[u,v];

Bellman Ford Algorithm
No. of iterations = n - 1 = 5; edges as listed in Step 1.

Iteration 4
Vertex       A  B  C  D  E  F
Distance     0  1  3  5  0  3
Predecessor  -  C  D  A  B  E

No distance values change in this or the remaining iteration, so these are the final shortest distances from A.
Bellman Ford Algorithm
Time Complexity:
Best Case: O(E)
Average Case: O(VE)
Worst Case: O(VE)
Space Complexity: O(V)
Here, V is the number of vertices and E is the number of edges
Applications
Network Routing: The algorithm can be used to find the shortest path in a network with negative weights, such as the internet.
Robot Navigation: The algorithm can be used to find the shortest path for a robot to navigate in an environment with obstacles.
Resource Allocation: The algorithm can be used to allocate resources optimally, such as assigning tasks to workers in a factory.
Image Processing: The algorithm can be used to find the shortest path between two points in an image; it can be useful in image segmentation and object recognition.
Game Theory: The algorithm can be used to find the optimal strategy for players in a game, such as in the game of chess.
Genetics: The algorithm can be used to find the shortest path in a genetic network, which can be used to analyze genetic interactions and identify potential drug targets.
Solve the following
All pairs shortest path - Floyd Warshall algorithm
• It is used to solve the All Pairs Shortest Path Problem.
• It computes the shortest path between every pair of vertices of the given graph.
• The Floyd Warshall Algorithm is an example of the dynamic programming approach.
• The Floyd Warshall Algorithm has the following main advantages:
• It is extremely simple.
• It is easy to implement.

Time Complexity:
• The Floyd Warshall Algorithm consists of three loops over all the nodes.
• The innermost loop consists of only constant-complexity operations.
• Hence, the asymptotic complexity of the Floyd Warshall algorithm is O(n³).
• Here, n is the number of nodes in the given graph.
Space Complexity:
• The space complexity of the Floyd Warshall Algorithm is O(n²).

Applications:
• Google maps, social network analysis, finding maximum bandwidth in network systems.
All pairs shortest path-Floyd Warshall
algorithm
Step-01:
Remove all the self loops and parallel edges (keeping the lowest weight edge) from
the graph.
In the given graph, there are neither self edges nor parallel edges.
Step-02:
Write the initial distance matrix.
It represents the distance between every pair of vertices in the form of given weights.
For diagonal elements (representing self-loops), distance value = 0.
For vertices having a direct edge between them, distance value = weight of that edge.
For vertices having no direct edge between them, distance value = ∞.
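A minimal sketch of the Floyd Warshall recurrence D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j]), updating a single distance matrix in place; the small input matrix is hypothetical, not the graph used in the worked example on the next slides.

INF = float('inf')

def floyd_warshall(dist):
    # dist: n x n initial distance matrix (0 on the diagonal, INF where no direct edge)
    n = len(dist)
    d = [row[:] for row in dist]            # copy so the input is not modified
    for k in range(n):                      # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

dist = [[0,   3, INF,   7],
        [8,   0,   2, INF],
        [5, INF,   0,   1],
        [2, INF, INF,   0]]
for row in floyd_warshall(dist):
    print(row)   # prints the all-pairs shortest distance matrix, one row per vertex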
Floyd Warshall algorithm - example
D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j])

Floyd Warshall algorithm - example (k = 1)
D1[3, 2] = min(D0[3, 2], D0[3, 1] + D0[1, 2]) = min(∞, 35)
D1[4, 2] = min(D0[4, 2], D0[4, 1] + D0[1, 2]) = min(∞, 20)
Floyd Warshall algorithm - example (k = 2)
D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j])

D2[1, 3] = min(D1[1, 3], D1[1, 2] + D1[2, 3]) = min(∞, 20)
D2[1, 4] = min(D1[1, 4], D1[1, 2] + D1[2, 4]) = min(∞, 10)
Floyd Warshall algorithm - example (k = 3)
D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j])

D3[2, 1] = min(D2[2, 1], D2[2, 3] + D2[3, 1]) = min(50, 45)
Floyd Warshall algorithm - example (k = 4)
D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j])

D4[1, 3] = min(D3[1, 3], D3[1, 4] + D3[4, 3]) = min(20, 15)
D4[2, 1] = min(D3[2, 1], D3[2, 4] + D3[4, 1]) = min(45, 20)
D4[2, 3] = min(D3[2, 3], D3[2, 4] + D3[4, 3]) = min(15, 10)
Solve the following
Optimal Binary Search Trees
• A binary search tree is a tree in which the nodes in the left subtree are smaller than the root node, while the nodes in the right subtree are greater than the root node.
• When we talk about binary search trees, the searching concept comes first. The cost of searching for a node in the tree plays a very important role.
• Our purpose is to reduce the cost of searching, and that can only be done with an optimal binary search tree.
• The optimal binary search tree (Optimal BST) is also known as a weight-balanced search tree. It is a binary search tree that provides the shortest possible expected search time. An optimal binary search tree is helpful in dictionary search.
• For example: 15, 25, 35, 45, 55, 65, 75
The maximum time required to search a node in a binary search tree equals the height of the tree; for a balanced tree of n nodes this is about log n.
Optimal Binary Search Trees
If we have keys 15, 25, 35, how many BSTs can be formed? The number of distinct BSTs on n keys is the Catalan number C(2n, n) / (n + 1); for n = 3 this gives 5, so 5 trees can be formed, as shown in the figure.
Average number of comparisons for the five trees: 2, 2, 5/3, 2, 2. The four skewed trees average 2 comparisons, while the balanced tree (25 at the root) averages 5/3 < 2.
Optimal Binary Search Trees

Key          A    B    C    D
Probability  0.1  0.2  0.4  0.3

Formulae:
C[i, i-1] = 0
C[i, i] = P_i
R[i, i] = i
C[i, j] = min over i ≤ k ≤ j of { C[i, k-1] + C[k+1, j] + Σ P_s for s = i..j }

Root = R[1, n] = R[1, 4] = 3
Left subtree root = R[i, k-1]; right subtree root = R[k+1, j]

Cost table C (rows 1..5, columns 0..4) and root table R (rows and columns 1..4) are filled using these formulae.
Optimal Binary Search Trees

Key          A    B    C    D
Probability  0.1  0.2  0.4  0.3

Initialization, using C[i, i-1] = 0, C[i, i] = P_i, and R[i, i] = i:
C[1,0] = 0   C[1,1] = 0.1
C[2,1] = 0   C[2,2] = 0.2
C[3,2] = 0   C[3,3] = 0.4
C[4,3] = 0   C[4,4] = 0.3
C[5,4] = 0

Cost table after initialization:
      j=0  j=1  j=2  j=3  j=4
i=1    0   0.1
i=2         0   0.2
i=3              0   0.4
i=4                   0   0.3
i=5                        0

Root table after initialization: R[1,1] = 1, R[2,2] = 2, R[3,3] = 3, R[4,4] = 4.
Optimal Binary Search Trees

Key          A    B    C    D
Probability  0.1  0.2  0.4  0.3

C[1,2] = min over k = 1, 2 of { C[1, k-1] + C[k+1, 2] + (P1 + P2) }
k = 1: C[1,0] + C[2,2] + P1 + P2 = 0 + 0.2 + 0.1 + 0.2 = 0.5
k = 2: C[1,1] + C[3,2] + P1 + P2 = 0.1 + 0 + 0.1 + 0.2 = 0.4
So C[1,2] = 0.4 (entered in the cost table) and R[1,2] = 2 (entered in the root table).
Using formulae: RootR[1,n]=R[1,4]=3
Optimal Binary Search Trees C[i, i-1]=0 LeftR[i,k-1]
Key A B C D C[i, i]=Pi RightR[k+1,j]
𝐣
Probability 0.1 0.2 0.4 0.3 R[i, i]=i 𝐜 [ 𝐢 , 𝐣 ] = 𝐦𝐢𝐧 {𝐜 [𝐢 , 𝐤 −𝟏]+ 𝐜 [ 𝐤 +𝟏, 𝐣 ]+ ∑ 𝐏𝐬 }
𝐢 ≤𝐤 ≤ 𝐣 𝐬 =𝐢
0 1 2 3 4 C[2,3] K=2 C[2,1]+C[3,3]+0.2+0.4 =0+0.4+0.2+0.4 =1.0
1 0 0.1 0.4 1.1 1.7
K=3 C[2,2]+C[4,3]+0.2+0.4 =0.2+0+0.2+0.4 =0.8

2 0.2 1.4 C[3,4] K=3 C[3,2]+C[4,4]+0.4+0.3 =0+0.3+0.4+0.3 =1.0


0 0.8
K=4 C[3,3]+C[5,4]+0.4+0.3 =0.4+0+0.4+0.3 =1.1
3 0 0.4 1.0
C[1,3] K=1 C[1,0]+C[2,3]+0.1+0.2+0.4 =0+0.8+0.1+0.2+0.4 =1.5
4 0 0.3
5 K=2 C[1,1]+C[3,3]+0.1+0.2+0.4 =0.1+0.4+0.1+0.2+0.4 =1.2
0
K=3 C[1,2]+C[4,3]+0.1+0.2+0.4 =0.4+0+0.1+0.2+0.4 =1.1
Cost table
1 2 3 4 C[2,4] K=2 C[2,1]+C[3,4]+0.2+0.4+0.3 =0+1.0+0.2+0.4+0.3 =1.9
1 1 2 3 4 K=3 C[2,2]+C[4,4]+0.2+0.4+0.3 =0.2+0.3+0.2+0.4+0.3 =1.4
K=4 C[2,3]+C[5,4]+0.2+0.4+0.3 =0.8+0+0.2+0.4+0.3 =1.7
2 2 3 3
C[1,4] K=1 C[1,0]+C[2,4]+0.1+0.2+0.4+0.3 =0+1.4+0.1+0.2+0.4+0.3 =2.4
3 3 3 K=2 C[1,1]+C[3,4]+0.1+0.2+0.4+0.3 =0.1+1+0.1+0.2+0.4+0.3 =2.1
4 4 K=3 C[1,2]+C[4,4]+0.1+0.2+0.4+0.3 =0.4+0.3+0.1+0.2+0.4+0.3 =1.7
Root table K=4 C[1,3]+C[5,4]+0.1+0.2+0.4+0.3 =0.4+0+0.1+0.2+0.4+0.3 =2.3
Optimal Binary Search Trees

Key          A    B    C    D
Probability  0.1  0.2  0.4  0.3

Reading the tree from the root table:
Root of the whole tree = R[1, 4] = 3 -> key C.
Left subtree (keys 1..2): root R[1, 2] = 2 -> key B; its left child is R[1, 1] = 1 -> key A.
Right subtree (keys 4..4): root R[4, 4] = 4 -> key D.

Resulting optimal BST: C at the root, with left child B (whose left child is A) and right child D.
Expected search cost = C[1, 4] = 1.7.
Optimal Binary Search Trees
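A minimal sketch of the cost-and-root-table computation above, following C[i, i-1] = 0, C[i, i] = P_i, and C[i, j] = min over i ≤ k ≤ j of (C[i, k-1] + C[k+1, j] + Σ P_s); it uses 1-based tables with one extra row so that C[k+1, j] is always defined (the function name is illustrative).

def optimal_bst(p):
    # p: 1-based probabilities, p[0] unused; e.g. p = [0, 0.1, 0.2, 0.4, 0.3]
    n = len(p) - 1
    C = [[0.0] * (n + 1) for _ in range(n + 2)]   # C[i][i-1] = 0 by construction
    R = [[0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 1):                     # diagonal: single-key subtrees
        C[i][i] = p[i]
        R[i][i] = i
    for length in range(2, n + 1):                # larger subproblems from smaller ones
        for i in range(1, n - length + 2):
            j = i + length - 1
            weight = sum(p[i:j + 1])              # sum of P_s for s = i..j
            best, best_k = float('inf'), i
            for k in range(i, j + 1):             # try each key as the subtree root
                cost = C[i][k - 1] + C[k + 1][j] + weight
                if cost < best:
                    best, best_k = cost, k
            C[i][j], R[i][j] = best, best_k
    return C, R

C, R = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])       # keys A, B, C, D as above
print(round(C[1][4], 1), R[1][4])                 # 1.7 3 (cost and root of the whole tree)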
Knapsack Problem by DP
Given n items of
  integer weights: w1, w2, ..., wn
  values: v1, v2, ..., vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i, j] be the optimal value of such an instance. Then

V[i, j] = max { V[i-1, j], vi + V[i-1, j-wi] }   if j - wi ≥ 0
V[i, j] = V[i-1, j]                              if j - wi < 0

Initial conditions: V[0, j] = 0 and V[i, 0] = 0.
Knapsack Problem by DP (example)
Example: knapsack of capacity W = 5

item  weight  value
1     2       $12
2     1       $10
3     3       $20
4     2       $15

                            capacity j
                       0   1   2   3   4   5
                 i=0   0   0   0   0   0   0
w1 = 2, v1 = 12  i=1   0   0  12  12  12  12
w2 = 1, v2 = 10  i=2   0  10  12  22  22  22
w3 = 3, v3 = 20  i=3   0  10  12  22  30  32
w4 = 2, v4 = 15  i=4   0  10  15  25  30  37

Backtracing finds the actual optimal subset, i.e. the solution:
V[4,5] = 37 includes item 4 (37 - 15 = 22 = V[3,3]), then item 2 (22 - 10 = 12 = V[1,2]), then item 1 (12 - 12 = 0), so X1 X2 X3 X4 = 1 1 0 1.
Knapsack Problem by DP (pseudocode)
Algorithm DPKnapsack(w[1..n], v[1..n], W)
  var V[0..n, 0..W], P[1..n, 1..W]: int
  for j := 0 to W do
    V[0, j] := 0
  for i := 0 to n do
    V[i, 0] := 0
  for i := 1 to n do
    for j := 1 to W do
      if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1, j] then
        V[i, j] := v[i] + V[i-1, j-w[i]];  P[i, j] := j - w[i]
      else
        V[i, j] := V[i-1, j];  P[i, j] := j
  return V[n, W] and the optimal subset (by backtracing)

Running time and space: O(nW).
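A minimal runnable version of the DPKnapsack pseudocode above (0-based Python lists instead of the slide's 1-based arrays; the chosen items are recovered by backtracing over V rather than via a separate P table):

def dp_knapsack(w, v, W):
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]     # V[0][j] = V[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            wi, vi = w[i - 1], v[i - 1]
            if wi <= j and vi + V[i - 1][j - wi] > V[i - 1][j]:
                V[i][j] = vi + V[i - 1][j - wi]
            else:
                V[i][j] = V[i - 1][j]
    items, j = [], W                              # backtrace the optimal subset
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:                # item i was taken
            items.append(i)
            j -= w[i - 1]
    return V[n][W], sorted(items)

# The example from the slides: W = 5, items (2, $12), (1, $10), (3, $20), (2, $15)
print(dp_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # (37, [1, 2, 4])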
String Editing
Goal: edit the string a b c f g (columns) into a d c e g (rows), using Insert, Remove, and Replace operations.

Rules for filling cell (r, c):
1. If the row and column characters differ (r ≠ c): cell = min(Replace, Remove, Insert) + 1, where
   Replace = the diagonal neighbour (↘), Remove = the cell above (↓), Insert = the cell to the left (→).
2. If the characters match (r == c): just copy the diagonal value.

Table with the first row and column initialized (distances to/from the empty string Null):

        Null  a  b  c  f  g
Null      0   1  2  3  4  5
a         1
d         2
c         3
e         4
g         5
String Editing (filling the table row by row, using the rules above)

Row a:  1  0  1  2  3  4    (column a matches: copy the diagonal 0)
Row d:  2  1  1  2  3  4
Row c:  3  2  2  1  2  3    (column c matches: copy the diagonal 1)
Row e:  4  3  3  2  2  3

Each non-matching cell is min(diagonal, above, left) + 1; for example, cell (d, a) = min(1, 0, 2) + 1 = 1.
String Editing (completed table)

        Null  a  b  c  f  g
Null      0   1  2  3  4  5
a         1   0  1  2  3  4
d         2   1  1  2  3  4
c         3   2  2  1  2  3
e         4   3  3  2  2  3
g         5   4  4  3  3  2

The bottom-right cell is 2: if we perform 2 operations (replace b with d, replace f with e), the string can be edited from a b c f g to a d c e g.

a b c f g
a d c e g
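A minimal sketch of the string-editing (edit distance) table above, with unit costs for insert, remove, and replace:

def edit_distance(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # dp[r][c]: cost to edit s1[:c] into s2[:r]
    for c in range(m + 1):
        dp[0][c] = c                             # first row: c insertions
    for r in range(n + 1):
        dp[r][0] = r                             # first column: r removals
    for r in range(1, n + 1):
        for c in range(1, m + 1):
            if s1[c - 1] == s2[r - 1]:           # characters match: copy the diagonal
                dp[r][c] = dp[r - 1][c - 1]
            else:                                # min(replace, remove, insert) + 1
                dp[r][c] = 1 + min(dp[r - 1][c - 1],   # replace (diagonal)
                                   dp[r - 1][c],       # remove  (above)
                                   dp[r][c - 1])       # insert  (left)
    return dp[n][m]

print(edit_distance("abcfg", "adceg"))   # 2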
String editing - Solve the following
1. Input: s1 = "abcdef", s2 = "abdfgh"
2. Input: s1 = "karolin", s2 = "kathrin"
Travelling Salesperson problem
