Dynamic Programming
10/31/2024
Approaches for Solving DP Problems
Greedy:
• Typically not an optimal solution (for DP-type problems)
• Use a criterion for picking
• Commit to a choice and do not look back
DP:
• Optimal solution
• Write a math function, sol, that captures the dependency of the solution to the current problem on solutions to smaller problems
• Can be implemented in any of the following ways: iterative, memoized, recursive
Brute Force:
• Optimal solution
• Produce all possible combinations, [check if valid], and keep the best
• Time: exponential
• Space: depends on the implementation
• It may be hard to generate all possible combinations
Top-Down Dynamic Programming (Memoization)
• Maintain an array/table where solutions to problems can be saved.
• To solve a problem P:
• See if the solution has already been stored in the array.
• If yes, return the solution.
• Else:
• Issue recursive calls to solve whatever smaller problems we need to solve.
• Using those solutions, obtain the solution to problem P.
• Store the solution in the solutions array.
• Return the solution.
Iterative or Bottom-Up Dynamic Programming
• Main type of solution for DP problems
• We can define the problem's size and solve problems from size 0 up to the size we need.
• Iterative – because it uses a loop
• Bottom-up because you solve problems from the bottom (the smallest
problem size) up to the original problem size.
Steps for iterative (bottom-up) solution
1. Identify trivial problems
   1. Typically where the size is 0
2. Look at the last step/choice in an optimal solution:
   1. Assuming an optimal solution, what is the last action in completing it?
   2. Is there more than one option for that last action?
   3. If you consider each action, what is the smaller problem that you would combine with that last action?
      1. Assume that you have the optimal answer to that smaller problem.
   4. Generate all these solutions.
   5. Compute the value (gain or cost) for each of these solutions.
   6. Keep the optimal one (max or min, based on the problem).
3. Make a 1D or 2D array and start filling in answers from smallest to largest problems.
Other types of solutions:
1. Brute-force solution
2. Recursive solution (most likely exponential and inefficient)
3. Memoized solution
The two common dynamic programming approaches are:
• Memoization: Known as the "top-down" dynamic programming approach; the problem is usually solved in the direction from the main problem to the base cases.
• Tabulation: Known as the "bottom-up" dynamic programming approach; the problem is usually solved in the direction from the base cases to the main problem.
Note: The base case does not always mean smaller input. It depends upon the implementation of the algorithm.
The Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
Steps to memoize a recursive solution:
Any recursive solution to a problem can be memoized using these three steps:
1. Create a dp[n+1] array initialized to -1.
2. Whenever we want to find the answer for a particular value (say n), we first check whether the answer has already been calculated, using the dp array (i.e., dp[n] != -1). If yes, simply return the value from the dp array.
3. If not, then we are finding the answer for the given value for the first time; we use the recursive relation as usual, but before returning from the function, we set dp[n] to the solution we get.
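The three steps above, applied to the Fibonacci sequence (a sketch in Python; the slides do not fix a language):

```python
def fib(n, dp=None):
    # Step 1: create a dp[n+1] array initialized to -1.
    if dp is None:
        dp = [-1] * (n + 1)
    # Step 2: if the answer is already stored, return it.
    if dp[n] != -1:
        return dp[n]
    # Step 3: compute via the recursive relation, store, then return.
    if n < 2:
        dp[n] = n
    else:
        dp[n] = fib(n - 1, dp) + fib(n - 2, dp)
    return dp[n]
```

For example, `fib(8)` returns 21, and each subproblem is solved only once.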
Tabulation method (Fibonacci):
Index: 1 2 3 4 5
Value: 0 1 1 2 3
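The table above, built bottom-up (a Python sketch, 1-indexed to match the slide):

```python
def fib_table(n):
    # dp[i] holds the i-th Fibonacci number, 1-indexed as on the slide:
    # dp[1] = 0, dp[2] = 1, and each later entry sums the previous two.
    dp = [0] * (n + 1)
    if n >= 2:
        dp[2] = 1
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[1:]

print(fib_table(5))   # the slide's table: [0, 1, 1, 2, 3]
```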
Shortest Path Problem
• Shortest path problem is a problem of finding the
shortest path(s) between vertices of a given graph.
• Shortest path between two vertices is a path that has
the least cost as compared to all other existing paths.
• Shortest path algorithms are a family of algorithms
used for solving the shortest path problem
• Shortest path algorithms have a wide range of applications, such as:
• Google Maps
• Road Networks
• Logistics Research
Shortest Path Problem
Single-Pair Shortest Path Problem:
• It is a shortest path problem where the shortest path between a given pair of vertices is
computed.
• A* Search Algorithm is a famous algorithm used for solving single-pair shortest path problem.
Bellman Ford Algorithm
Number of iterations = n - 1, where n is the number of vertices. Here: 6 - 1 = 5 iterations.
Relaxation rule:
If (d[u] + c[u,v] < d[v])
    d[v] = d[u] + c[u,v];
Step 1. List all the edges:
Edges  Cost
A-B     6
A-C     4
A-D     5
B-E    -1
C-B    -2
C-E     3
D-C    -2
D-F    -1
E-F     3
Iteration 3
Vertex:      A  B  C  D  E  F
Distance:    0  1  3  5  0  3
Predecessor: -  C  D  A  B  E
Iteration 4 (no further changes)
Vertex:      A  B  C  D  E  F
Distance:    0  1  3  5  0  3
Predecessor: -  C  D  A  B  E
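The relaxation loop above can be sketched in Python, using the edge list from the slide's example graph:

```python
def bellman_ford(vertices, edges, source):
    # Initialize distances: 0 for the source, "infinity" elsewhere.
    dist = {v: float('inf') for v in vertices}
    pred = {v: None for v in vertices}
    dist[source] = 0
    # Relax every edge n-1 times (n = number of vertices).
    for _ in range(len(vertices) - 1):
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                pred[v] = u
    # One extra pass: any further improvement means a negative cycle.
    for u, v, cost in edges:
        if dist[u] + cost < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist, pred

# The example graph from the slides.
edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5), ('B', 'E', -1),
         ('C', 'B', -2), ('C', 'E', 3), ('D', 'C', -2), ('D', 'F', -1),
         ('E', 'F', 3)]
dist, pred = bellman_ford('ABCDEF', edges, 'A')
```

Running this reproduces the converged table: distances A=0, B=1, C=3, D=5, E=0, F=3, with predecessors B←C, C←D, D←A, E←B, F←E.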
Bellman Ford Algorithm
Time complexity:
• Best case: O(E)
• Average case: O(VE)
• Worst case: O(VE)
Space complexity: O(V)
Here, V is the number of vertices and E is the number of edges.
Applications
Network Routing: The algorithm can be used to find the shortest path in a network with negative weights, such as the internet.
Robot Navigation: The algorithm can be used to find the shortest path for a robot to navigate in an environment with obstacles.
Resource Allocation: The algorithm can be used to allocate resources optimally, such as assigning tasks to workers in a factory.
Image Processing: This algorithm can be used to find the shortest path between two points in an image; this can be useful in image segmentation and object recognition.
Game Theory: This algorithm can be used to find the optimal strategy for players in a game, such as in the game of chess.
Genetics: The algorithm can be used to find the shortest path in a genetic network, which can be used to analyze genetic interactions and identify potential drug targets.
Solve the following
All pairs shortest path-Floyd Warshall algorithm
It is used to solve All Pairs Shortest Path Problem.
It computes the shortest path between every pair of vertices of the given graph.
Floyd Warshall Algorithm is an example of dynamic programming approach.
Floyd Warshall Algorithm has the following main advantages-
It is extremely simple.
It is easy to implement.
Time Complexity-
Floyd Warshall Algorithm consists of three loops over all the nodes.
The inner most loop consists of only constant complexity operations.
Hence, the asymptotic complexity of the Floyd Warshall algorithm is O(n³).
Here, n is the number of nodes in the given graph.
Space Complexity:
The space complexity of Floyd Warshall Algorithm is O(n²).
Applications:
Google maps, social network analysis, finding maximum bandwidth in network systems
All pairs shortest path-Floyd Warshall algorithm
Step-01:
Remove all the self loops and parallel edges (keeping the lowest weight edge) from
the graph.
In the given graph, there are neither self edges nor parallel edges.
Step-02:
Write the initial distance matrix.
It represents the distance between every pair of vertices in the form of given weights.
For diagonal elements (representing self-loops), distance value = 0.
For vertices having a direct edge between them, distance value = weight of that edge.
For vertices having no direct edge between them, distance value = ∞.
Floyd Warshall algorithm-example
D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j])
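The recurrence above can be sketched in Python, with the graph as an adjacency matrix using float('inf') for missing edges. The 4-vertex example matrix below is hypothetical, since the slide's graph figure did not survive extraction:

```python
INF = float('inf')

def floyd_warshall(dist):
    # dist is an n x n matrix: dist[i][j] = edge weight, 0 on the
    # diagonal, INF when there is no direct edge.
    n = len(dist)
    D = [row[:] for row in dist]          # D_0 = initial distance matrix
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # D_k[i][j] = min(D_{k-1}[i][j], D_{k-1}[i][k] + D_{k-1}[k][j])
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# A small hypothetical 4-vertex example.
graph = [[0,   3,   INF, 7],
         [8,   0,   2,   INF],
         [5,   INF, 0,   1],
         [2,   INF, INF, 0]]
D = floyd_warshall(graph)
```

The three nested loops over all nodes give the O(n³) time bound stated above; only the n × n matrix is stored, giving O(n²) space.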
Optimal Binary Search Trees
Key:         A    B    C    D
Probability: 0.1  0.2  0.4  0.3
So we can see that 5 trees will be formed, as shown in the figure. [Figure not reproduced.]
Recurrence (choose the root k of the subtree over keys i..j):
c[i, j] = min over i ≤ k ≤ j of { c[i, k-1] + c[k+1, j] + Σ P_s for s = i..j }
Base cases: c[i, i-1] = 0, c[i, i] = P_i, R[i, i] = i.
If R[i, j] = k, the left subtree is built from R[i, k-1] and the right subtree from R[k+1, j].
Example: c[1, 2], trying k = 1, 2:
k = 1: c[1, 0] + c[2, 2] + P1 + P2 = 0 + 0.2 + 0.1 + 0.2 = 0.5
k = 2: c[1, 1] + c[3, 2] + P1 + P2 = 0.1 + 0 + 0.1 + 0.2 = 0.4
So c[1, 2] = 0.4 and the root table entry R[1, 2] = 2.
Example: c[2, 3], trying k = 2, 3:
k = 2: c[2, 1] + c[3, 3] + P2 + P3 = 0 + 0.4 + 0.2 + 0.4 = 1.0
k = 3: c[2, 2] + c[4, 3] + P2 + P3 = 0.2 + 0 + 0.2 + 0.4 = 0.8
So c[2, 3] = 0.8 and R[2, 3] = 3.
Cost table c[i, j] (rows i = 1..5, columns j = 0..4; blank cells filled as the table is completed):
      j=0   j=1   j=2   j=3   j=4
i=1    0    0.1   0.4   1.1   1.7
i=2          0    0.2
i=3                0    0.4
i=4                      0    0.3
i=5                            0
The root of the optimal tree over all keys is R[1, n] = R[1, 4] = 3 (key C).
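The cost and root tables above can be reproduced with a short bottom-up sketch (Python; indices are 1-based to match the slides):

```python
def optimal_bst(p):
    """Cost and root tables for an optimal BST.
    p[0] is unused; p[1..n] are the key probabilities."""
    n = len(p) - 1
    # c[i][j] = cost of an optimal BST over keys i..j; c[i][i-1] = 0.
    c = [[0.0] * (n + 2) for _ in range(n + 2)]
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        c[i][i] = p[i]                      # single-key subtree
        root[i][i] = i
    for length in range(2, n + 1):          # subproblem size
        for i in range(1, n - length + 2):
            j = i + length - 1
            total = sum(p[i:j + 1])         # sum of P_s for s = i..j
            best, best_k = float('inf'), i
            for k in range(i, j + 1):       # try each key k as the root
                cost = c[i][k - 1] + c[k + 1][j] + total
                if cost < best:
                    best, best_k = cost, k
            c[i][j], root[i][j] = best, best_k
    return c, root

# Probabilities for keys A, B, C, D from the slide.
c, root = optimal_bst([0.0, 0.1, 0.2, 0.4, 0.3])
```

This yields c[1][4] = 1.7 and root R[1, 4] = 3, matching the tables.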
Knapsack recurrence:
V[i, j] = max { V[i-1, j], v_i + V[i-1, j - w_i] }   if j - w_i ≥ 0
V[i, j] = V[i-1, j]                                  if j - w_i < 0
(Example item from the slide: w4 = 2, v4 = 15.)
Knapsack Problem by DP (pseudocode)
Algorithm DPKnapsack(w[1..n], v[1..n], W)
    var V[0..n, 0..W], P[1..n, 1..W]: int
    for j := 0 to W do
        V[0, j] := 0
    for i := 0 to n do
        V[i, 0] := 0
    for i := 1 to n do
        for j := 1 to W do
            if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1, j] then
                V[i, j] := v[i] + V[i-1, j-w[i]]; P[i, j] := j - w[i]
            else
                V[i, j] := V[i-1, j]; P[i, j] := j
    return V[n, W] and the optimal subset by backtracing
Running time and space: O(nW).
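A runnable Python version of the pseudocode. The item data below (w = [2, 1, 3, 2], v = [12, 10, 20, 15], W = 5) is an assumed example instance; only the fragment w4 = 2, v4 = 15 survives on the slide:

```python
def knapsack(w, v, W):
    # V[i][j] = best value using the first i items with capacity j.
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i-1] <= j and v[i-1] + V[i-1][j - w[i-1]] > V[i-1][j]:
                V[i][j] = v[i-1] + V[i-1][j - w[i-1]]   # take item i
            else:
                V[i][j] = V[i-1][j]                     # skip item i
    # Backtrace to recover the chosen subset.
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i-1][j]:
            items.append(i)          # item i was taken
            j -= w[i-1]
    return V[n][W], sorted(items)

best, items = knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)
```

For this instance the optimal value is 37, taking items 1, 2, and 4 (total weight 5).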
String Editing (edit distance between s1 = "adceg", rows, and s2 = "abcfg", columns)
Rules for cell (r, c):
1. If r ≠ c (characters differ): take min{Replace, Remove, Insert} + 1, where
   Replace = the diagonal neighbor (↘), Remove = the cell above (↓), Insert = the cell to the left (→).
2. If r == c (characters match): just copy the diagonal value.
Initial table (first row and column filled with the costs of building from/to the empty string):
      Null  a  b  c  f  g
Null   0    1  2  3  4  5
a      1
d      2
c      3
e      4
g      5
String Editing: completed table
      Null  a  b  c  f  g
Null   0    1  2  3  4  5
a      1    0  1  2  3  4
d      2    1  1  2  3  4
c      3    2  2  1  2  3
e      4    3  3  2  2  3
g      5    4  4  3  3  2
The edit distance is the bottom-right entry: 2 (replace d with b, replace e with f).
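The table-filling rules can be sketched in Python; the same function then answers the practice inputs below:

```python
def edit_distance(s1, s2):
    # dp[r][c] = edit distance between s1[:r] and s2[:c].
    rows, cols = len(s1) + 1, len(s2) + 1
    dp = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        dp[r][0] = r                                 # remove all of s1[:r]
    for c in range(cols):
        dp[0][c] = c                                 # insert all of s2[:c]
    for r in range(1, rows):
        for c in range(1, cols):
            if s1[r - 1] == s2[c - 1]:
                dp[r][c] = dp[r - 1][c - 1]          # copy the diagonal
            else:
                dp[r][c] = 1 + min(dp[r - 1][c - 1], # replace (diagonal)
                                   dp[r - 1][c],     # remove (above)
                                   dp[r][c - 1])     # insert (left)
    return dp[-1][-1]

print(edit_distance("adceg", "abcfg"))   # the slides' worked example: 2
```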
Practice:
1. Input: s1 = "abcdef", s2 = "abdfgh"
2. Input: s1 = "karolin", s2 = "kathrin"
Travelling Salesperson problem