Dynamic Programming
The Fibonacci sequence is a classic way to demonstrate dynamic programming: the naive
recursion has overlapping subproblems, and solving each subproblem only once can
significantly improve efficiency.
#include <stdio.h>
#define MAX 100

int memo[MAX];

int fib_memo(int n) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n]; // Reuse a previously computed result
    return memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
}

int main() {
    // Initialize the memoization array to -1 (indicating uncomputed values)
    for (int i = 0; i < MAX; i++) memo[i] = -1;
    printf("Fibonacci number %d is %d\n", 10, fib_memo(10));
    return 0;
}
Explanation:
memo[] array is used to store the results of previously calculated Fibonacci numbers.
Each Fibonacci number is computed only once and reused when needed, making the time
complexity O(n) instead of O(2^n).
#include <stdio.h>

int fib_tab(int n) {
    if (n <= 1) return n;
    int table[n + 1]; // table[i] holds fib(i)
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2]; // Each entry builds on the two before it
    return table[n];
}

int main() {
    int n = 10; // Example Fibonacci number to calculate
    printf("Fibonacci number %d is %d\n", n, fib_tab(n));
    return 0;
}
Explanation:
A bottom-up approach builds the solution from the smallest subproblems (starting from
fib(0) and fib(1)) up to the desired Fibonacci number.
The solution uses an iterative approach to fill the table[] array.
The 0/1 Knapsack problem is a classic DP problem: maximize the total value of items placed
in a knapsack of limited capacity, where each item is either taken whole or left out
entirely (hence 0/1).
#include <stdio.h>

static int max(int a, int b) { return a > b ? a : b; }

int main() {
    int weights[] = {2, 3, 4, 5}; // Weights of the items
    int values[] = {3, 4, 5, 6};  // Values of the items
    int W = 5;                    // Capacity of the knapsack
    int n = sizeof(weights) / sizeof(weights[0]);
    int dp[n + 1][W + 1];         // dp[i][w]: best value using the first i items
    for (int i = 0; i <= n; i++)
        for (int w = 0; w <= W; w++)
            if (i == 0 || w == 0) dp[i][w] = 0;
            else if (weights[i - 1] <= w) // Include the item if it improves on excluding it
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]], dp[i - 1][w]);
            else dp[i][w] = dp[i - 1][w]; // Item too heavy: exclude it
    printf("Maximum value: %d\n", dp[n][W]);
    return 0;
}
Explanation:
A 2D dp array is used where dp[i][w] represents the maximum value obtainable using
the first i items with a knapsack capacity of w.
We iteratively fill the table based on whether to include or exclude an item.
#include <stdio.h>

static int max(int a, int b) { return a > b ? a : b; }

int main() {
    int weights[] = {2, 3, 4, 5}; // Weights of the items
    int values[] = {3, 4, 5, 6};  // Values of the items
    int W = 5;                    // Capacity of the knapsack
    int n = sizeof(weights) / sizeof(weights[0]);
    int dp[W + 1];                // dp[w]: best value achievable with capacity w
    for (int w = 0; w <= W; w++) dp[w] = 0;
    for (int i = 0; i < n; i++)
        for (int w = W; w >= weights[i]; w--) // Downward, so each item is used at most once
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]]);
    printf("Maximum value: %d\n", dp[W]);
    return 0;
}
Explanation:
Instead of using a 2D array, we use a 1D array dp[] where dp[w] stores the maximum
value achievable with capacity w.
We iterate from W down to the weight of the current item so that dp[w - weights[i]] still
holds the value from the previous item's pass; iterating upward would read entries already
updated in this pass and count the same item more than once.
Conclusion:
Dynamic programming optimizes the time complexity by solving each subproblem only once,
leading to significant improvements in performance for problems with overlapping subproblems.