This article is a gentle introduction to dynamic programming. Dynamic programming is often considered one of the hardest algorithmic techniques to work with. However, there is a systematic way to approach these problems, and with practice the method becomes routine.
Dynamic programming is best suited for situations where duplicate computations can be eliminated by caching their results. Let's take a look at the steps needed to solve a DP problem.
Steps to solve DP
- Define the state
- List out all the state transitions
- Implement a recursive solution
- Memoize (top-down)
- Make it bottom-up
Define the state
What are states in a dynamic programming context?
States are the set of parameters that define the system. Essentially, we are trying to find the smallest number of parameters that change the state of the system. These are the inputs you pass into the recursive function. Most of the time, just one or two parameters define the state of the system.
After you define the state, you need to define the cost function and what it returns. This is the quantity we are trying to optimize.
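As a minimal one-parameter illustration (using the classic Fibonacci numbers as a hypothetical example, separate from the knapsack problem below): the state is the single parameter n, and the cost function fib(n) returns the n-th Fibonacci number.

```python
# Hypothetical one-parameter example: the state is just n, and the
# cost function fib(n) returns the n-th Fibonacci number.
def fib(n):
    if n <= 1:  # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```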
Example – knapsack problem
Given a set of items, each with a weight and a value, determine which items to include in a bag so that the total weight is less than or equal to a given limit while the total value is as large as possible.
1. W – Available capacity of the bag (the limit)
2. i – Index of the item being considered
Cost function: knapsack(W, i), returns the largest total value that is less than or equal to the limit
Define the state transitions and optimal choice
At this step, you need to work out the recurrence relation: which parameters you will pass into the recursive function at each step, and how the results of the subproblems are combined.
In order to find out the recurrence relation, you first need to identify the base cases.
1. The very last state, where there are no more items to process.
2. A parameter reaches 0 (or some other terminal value) beyond which we cannot proceed.
After identifying the base cases, we need to define the transitions. We identify all the valid candidates and try each of them one at a time (as in backtracking). For each selected candidate, we adjust the parameters accordingly and call the recursive function. After trying all the candidates, we combine the results to find the optimal value to return.
Example – knapsack problem
1. If W = 0, the knapsack has no remaining capacity, so no additional items fit.
2. If i = -1 (we iterate from the last item down to 0), all items have been processed and nothing is left.
In both cases, the function returns 0.
At each item, we have two decisions to make.
1. Take the i-th item: W becomes W – weight[i]. This requires weight[i] <= W.
2. Skip the i-th item: W stays the same.
This is the recurrence relation:

```
                          knapsack(W, i)
                         /              \
value[i] + knapsack(W - weight[i], i - 1)    knapsack(W, i - 1)
```

Then, the optimal choice would be:

```
max(value[i] + knapsack(W - weight[i], i - 1), knapsack(W, i - 1))
```

Recurrence relation:

```
Base cases:
  knapsack(0, i)  = 0
  knapsack(W, -1) = 0

if weight[i] <= W:
  knapsack(W, i) = max(value[i] + knapsack(W - weight[i], i - 1), knapsack(W, i - 1))
else:
  knapsack(W, i) = knapsack(W, i - 1)
```
Implement a recursive solution
Here is a Python version, which should be easy enough to follow.
```python
def knapsack(W, i, weights, values):
    if W == 0 or i == -1:
        return 0
    if weights[i] <= W:
        return max(
            values[i] + knapsack(W - weights[i], i - 1, weights, values),
            knapsack(W, i - 1, weights, values)
        )
    else:
        return knapsack(W, i - 1, weights, values)
```
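A self-contained sanity check of the recursive version (the function is repeated here for completeness; the weights and values are made-up sample data):

```python
def knapsack(W, i, weights, values):
    # base cases: no capacity left, or no items left to consider
    if W == 0 or i == -1:
        return 0
    if weights[i] <= W:
        # either take item i or skip it; keep the better result
        return max(
            values[i] + knapsack(W - weights[i], i - 1, weights, values),
            knapsack(W, i - 1, weights, values)
        )
    # item i does not fit, so it must be skipped
    return knapsack(W, i - 1, weights, values)

weights, values = [1, 3, 4, 5], [1, 4, 5, 7]  # made-up sample data
print(knapsack(7, len(weights) - 1, weights, values))  # 9 (items of weight 3 and 4)
```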
The naive recursive solution is slow – O(2^n) – and we can improve the performance by using memoization. How do we use it?
We cache the results of subproblems to avoid computing duplicate subproblems.
If there is only one parameter, we can use a 1D array and use the value of the parameter as the index to store the result.
If there are two parameters, then we use a 2D array – matrix.
Note that the array must be initialized with a sentinel default value (such as -1) so we can tell which subproblems have already been computed.
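For instance, a sketch of initializing the 2D memo table for the knapsack problem (assuming a hypothetical capacity W and item count n; -1 works as the sentinel because valid results are never negative):

```python
W, n = 7, 4  # hypothetical capacity and number of items

# dp[w][i] == -1 means "not computed yet"; dimensions (W + 1) x n
dp = [[-1] * n for _ in range(W + 1)]

print(len(dp), len(dp[0]))  # 8 4
```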
```python
def knapsack(W, i, weights, values, dp):
    if W == 0 or i == -1:
        return 0
    # check the cache to avoid duplicate computation
    if dp[W][i] != -1:
        return dp[W][i]
    # not computed yet
    if weights[i] <= W:
        result = max(
            values[i] + knapsack(W - weights[i], i - 1, weights, values, dp),
            knapsack(W, i - 1, weights, values, dp)
        )
    else:
        result = knapsack(W, i - 1, weights, values, dp)
    dp[W][i] = result
    return result
```
Now you have completed a DP problem with the top-down approach.
If we have already solved the problem, why would we also solve it bottom-up?
There can be a couple of reasons to do that.
1. You might get a stack overflow error due to deep recursion.
2. Bottom-up is usually faster than top-down, since it avoids function-call overhead.
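The first reason is easy to demonstrate in Python, which caps recursion depth (commonly around 1000 frames by default), so a top-down solution over many items can fail where a loop would not. A small sketch:

```python
import sys

def depth(n):
    # each level of recursion adds one stack frame,
    # just like one recursive call per item in a top-down DP
    return 0 if n == 0 else depth(n - 1)

try:
    depth(sys.getrecursionlimit() + 100)  # deeper than the interpreter allows
except RecursionError:
    print("stack overflow")  # prints "stack overflow"
```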
In the bottom-up approach, you have to think in the reverse direction: start at the leaves of the recursion tree, solve the smallest subproblems first, and work your way up. That is, we start from the base cases and compute upward to the final state. We use one loop per parameter and fill in the table by building larger solutions from smaller ones.
Now we need to decide which subproblems to solve first. That depends on how the parameters change in the recurrence relation: if a parameter decreases in the subproblem, we iterate from 0 up to its maximum value.
```
Base cases:
  dp[w][0] = 0   # no items considered
  dp[0][i] = 0   # no capacity

Bottom-up equation (item i corresponds to weights[i - 1] and values[i - 1]):
  if weights[i - 1] <= w:
    dp[w][i] = max(values[i - 1] + dp[w - weights[i - 1]][i - 1], dp[w][i - 1])
  else:
    dp[w][i] = dp[w][i - 1]
```
```python
def knapsack(W, weights, values):
    dp = [[0 for i in range(0, len(weights) + 1)] for j in range(0, W + 1)]
    for i in range(1, len(weights) + 1):
        for w in range(0, W + 1):
            if weights[i - 1] <= w:
                dp[w][i] = max(dp[w][i - 1],
                               dp[w - weights[i - 1]][i - 1] + values[i - 1])
            else:
                dp[w][i] = dp[w][i - 1]
    return dp[W][len(weights)]
```
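A self-contained check of the bottom-up version (the function is repeated here for completeness; the weights and values are made-up sample data):

```python
def knapsack(W, weights, values):
    # dp[w][i]: best value using the first i items with capacity w
    dp = [[0] * (len(weights) + 1) for _ in range(W + 1)]
    for i in range(1, len(weights) + 1):
        for w in range(0, W + 1):
            if weights[i - 1] <= w:
                # take item i-1 or skip it; keep the better result
                dp[w][i] = max(dp[w][i - 1],
                               dp[w - weights[i - 1]][i - 1] + values[i - 1])
            else:
                dp[w][i] = dp[w][i - 1]
    return dp[W][len(weights)]

# made-up sample data: best choice is the items of weight 3 and 4
print(knapsack(7, [1, 3, 4, 5], [1, 4, 5, 7]))  # 9
```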
At first glance, the bottom-up solution doesn't look intuitive at all. However, after solving the problem top-down and writing out the recurrence relation, it makes complete sense. Dynamic programming is hard, but you will definitely get better with practice!