
8. Dynamic Programming

Dynamic programming (DP) is an optimization technique used in algorithms to solve complex problems by breaking them down into simpler subproblems. It’s highly effective when subproblems overlap, allowing the reuse of previously computed solutions to avoid redundant calculations.

What is Dynamic Programming?

In dynamic programming, a complex problem is solved by dividing it into simpler subproblems, solving each only once, and storing their solutions in a table (usually an array). This stored data helps avoid recomputation, making dynamic programming efficient for problems with overlapping subproblems.

Characteristics of Dynamic Programming Problems

  1. Optimal Substructure: The optimal solution of the problem can be constructed from the optimal solutions of its subproblems.
  2. Overlapping Subproblems: Subproblems are reused multiple times, making it beneficial to store their results.
  3. Memoization: Storing the results of expensive function calls and returning the cached result when the same inputs occur again.
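
To see why overlapping subproblems matter, a small experiment (the call counter is purely illustrative) counts how many times a naive recursive Fibonacci revisits the same subproblems:

```python
# Sketch: count how many calls a naive recursive Fibonacci makes.
# Each fib(k) for small k is recomputed many times because the
# subproblems overlap -- exactly what memoization or tabulation avoids.
call_count = 0

def naive_fib(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(20)
print(call_count)  # 21891 calls just to compute fib(20)
```

With stored results, the same computation needs only about 21 calls, one per distinct subproblem.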

Tabulation Approach (Bottom-Up)

In the tabulation method, the solution is built incrementally by solving all subproblems first and using their results to solve the larger problem. This approach is referred to as bottom-up dynamic programming because it starts from the smallest subproblem and gradually works its way up.

Example: Fibonacci Sequence (Tabulation)

def fibonacci(n):
    # Table holds fib(0) .. fib(n); fib(0) = 0 by default
    fib_table = [0] * (n + 1)
    if n > 0:  # guard so n = 0 does not index past the table
        fib_table[1] = 1

    # Each entry is built from the two smaller subproblems
    for i in range(2, n + 1):
        fib_table[i] = fib_table[i - 1] + fib_table[i - 2]

    return fib_table[n]

print(fibonacci(10))  # Output: 55
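
Since each table entry depends only on the previous two, the full table can be collapsed to two variables; a minimal constant-space variant of the same bottom-up idea:

```python
def fibonacci_constant_space(n):
    # Only the last two subproblem results are ever needed,
    # so O(n) table space shrinks to O(1)
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fibonacci_constant_space(10))  # Output: 55
```

This space optimization is common in tabulation: once older subproblem results are no longer referenced, they can be discarded.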

Memoization Approach (Top-Down)

Memoization is the technique of storing intermediate results of expensive function calls so they don’t need to be recomputed. In this approach, solutions to subproblems are saved as they are computed, typically using recursion.

Example: Fibonacci Sequence (Memoization)

def fibonacci(n, memo=None):
    # Use None as the default: a mutable default dict would be
    # shared across all top-level calls to this function
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

print(fibonacci(10))  # Output: 55

Applications of Dynamic Programming

  1. Knapsack Problem: Solving optimization problems where items must be selected within a weight limit to maximize value.
  2. Longest Common Subsequence: Finding the longest subsequence common to two strings.
  3. Matrix Chain Multiplication: Finding the optimal way to multiply matrices, minimizing the total number of scalar multiplications.
  4. Shortest Path Algorithms: Such as Bellman-Ford, which finds the shortest path in a weighted graph with negative edge weights.
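
As an illustration of the first application, a bottom-up sketch of the 0/1 knapsack (the item values and weights here are made-up sample data):

```python
def knapsack(values, weights, capacity):
    # dp[w] = best total value achievable with total weight <= w
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # Output: 220
```

The table reuses solutions for smaller capacities (overlapping subproblems), and the best value at a given capacity is built from best values at smaller capacities (optimal substructure).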

Dynamic Programming vs. Divide and Conquer

Though both dynamic programming and divide and conquer break the problem into subproblems, the key difference is that dynamic programming involves solving overlapping subproblems, whereas divide and conquer divides problems into smaller independent subproblems that do not overlap.

