The field of optimization encompasses many families of models and algorithms. It can be classified along many different criteria: linear vs. nonlinear, convex vs. non-convex, continuous vs. discrete, and so on. Often, one problem can be solved with several different approaches, and this is where different underlying solution philosophies meet, which I find very interesting. In this article, we dive deeper into the difference between dynamic programming and integer programming through the interesting and well-studied knapsack problem.


What is the Knapsack Problem?

The knapsack problem is perhaps most widely known as a medium-level Leetcode problem. But long before Leetcode, knapsack was covered in introductory integer programming classes and upper-level computer science classes, thanks to its recursive nature and easy problem setup. As an optimization person, I encountered the knapsack problem as one of the first problems in integer programming class. What’s more, the knapsack problem is in fact NP-hard! However, it is one of the NP-hard problems we can solve pretty efficiently. All the more reason to start with it!

Let’s recap what the knapsack problem is. Suppose you are going on a trip, and you have a list of $n$ items you would like to carry with you in your backpack (or knapsack, if you will). Each item has a weight and a utility score. You would like to find the subset of items whose total weight is below a threshold $W$ and whose total utility score is maximized.

From the description, the problem sounds simple enough. One caveat is that each item can either be selected or not selected; it cannot be half selected or fractionally selected. This is the distinction between the 0-1 knapsack problem, where each item is taken as a whole, and the fractional knapsack problem, where a fractional part of an item can be selected. In real life, the 0-1 knapsack problem corresponds to deciding whether a flashlight should be packed, whereas the fractional problem might be deciding how much of a gallon of water to bring.

Simple Heuristic

The main tradeoff of the knapsack problem is getting as much utility score as possible while staying within the weight limit. A natural idea is to calculate a utility/weight ratio for each item, then fit the items with the highest utility score per unit weight until hitting the weight threshold. This is a greedy algorithm, and (when the last item is allowed to be taken fractionally) it is actually the exact solution to the fractional knapsack problem. However, it does not guarantee an optimal solution to the 0-1 knapsack problem, as a counterexample below demonstrates.
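To make the idea concrete, here is a minimal sketch of that greedy heuristic for the 0-1 problem (the function name and interface are my own choices, not from any library):

def greedy_01(weight_threshold, weight_list, util_list):
    """Greedy heuristic for the 0-1 knapsack: take items in decreasing
    utility/weight ratio as long as they fit. Not guaranteed optimal."""
    # Sort item indices by utility/weight ratio, best first
    order = sorted(range(len(weight_list)),
                   key=lambda i: util_list[i] / weight_list[i],
                   reverse=True)
    total_util, remaining = 0, weight_threshold
    for i in order:
        if weight_list[i] <= remaining:
            remaining -= weight_list[i]
            total_util += util_list[i]
    return total_util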

Suppose the specs of the items are given in the following table:

Items     1     2     3
Utility   60    100   120
Weight    10    20    30
Ratio     6     5     4

The weight threshold is 50. Using the greedy heuristic, we select items 1 and 2, with a total utility of 160 and a total weight of 30 (item 3 no longer fits). Clearly, this is not the optimal solution, which is to choose items 2 and 3, where the total utility is 220 and the total weight is exactly 50. The insight is that ranking by a relative ratio is good, but it does not take into account the absolute weight values when items cannot be selected fractionally.
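Running the greedy sketch from above on this instance confirms the gap:

print(greedy_01(50, [10, 20, 30], [60, 100, 120]))   # prints 160; the optimum is 220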

As a matter of fact, one can exploit this weakness and create instances of the knapsack problem that make the solution of the greedy heuristic arbitrarily bad.

Consider the following knapsack problem with weight threshold $X$:

Items     1     2
Utility   1     X-1
Weight    1     X
Ratio     1     1-1/X

The greedy algorithm will select only item 1, for a total utility of 1, rather than the optimal solution of selecting item 2 with utility score $X-1$. As we make $X$ arbitrarily large, the greedy algorithm performs arbitrarily badly compared to the optimal solution.

Dynamic Programming Approach

Dynamic programming is based on the idea that, in the optimal solution, a given item $i$ is either in the selected subset or not. This property defines the recursive nature of the algorithm. Let $I$ be the set of items, $u_i$ and $w_i$ the utility and weight of item $i$ respectively, and $W$ the weight threshold, and let $knapsack(I,W)$ denote the optimal objective value of the knapsack problem with item set $I$ and weight threshold $W$. In mathematical terms, for any $i \in I$, the recursion can be defined as
$$
knapsack(I, W) = \max(knapsack(I\backslash \{i\}, W), u_i + knapsack(I\backslash \{i\}, W-w_i)),
$$
where $I\backslash\{i\}$ represents the set of items $I$ without item $i$.

  • The first term in the $\max$ is the case where item $i$ is not in the optimal solution: the optimal value is then the same as that of the knapsack problem with the same weight threshold and all items except item $i$.
  • The second term is the case where item $i$ is in the optimal solution: choosing the rest of the items optimally requires solving a new knapsack problem with item set $I\backslash\{i\}$ and weight threshold $W-w_i$.

With this recursive definition, we can build a table to keep track of the optimal solution of each knapsack problem, starting from 0 items and weight threshold 0 to all items in set $I$ and weight threshold $W$. Then whenever we are computing the solution of $knapsack(I\backslash\{i\}, W-w_i)$, we can just look up the solution in the table.
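Note that the recursion also translates directly into a top-down memoized function. Here is a minimal sketch of that variant (indexing items by position rather than by set, which is equivalent here):

from functools import lru_cache

def knapsack_recursive(weight_threshold, weight_list, util_list):
    @lru_cache(maxsize=None)
    def best(i, w):
        # Optimal utility using only the first i items with capacity w
        if i == 0 or w == 0:
            return 0
        if weight_list[i - 1] > w:
            return best(i - 1, w)  # item i cannot fit
        # Either skip item i, or take it and solve the smaller subproblem
        return max(best(i - 1, w),
                   util_list[i - 1] + best(i - 1, w - weight_list[i - 1]))
    return best(len(weight_list), weight_threshold)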

Here is a bottom-up Python implementation of the dynamic programming algorithm:

def knapSack(weight_threshold, weight_list, util_list):
    """
    A dynamic programming algorithm that solves the 0-1 knapsack problem.

    Args:
        weight_threshold: Weight threshold W
        weight_list: List of weights for each item in item set I
        util_list: Utility score of each item i

    Returns:
        The maximum total utility of a subset of items with total weight
        at most weight_threshold
    """
    n = len(util_list)
    lookup_table = [[0 for _ in range(weight_threshold + 1)] for _ in range(n + 1)]

    # Build table lookup_table (denoted K in the text) in bottom-up manner
    for i in range(n + 1):
        for w in range(weight_threshold + 1):
            if i == 0 or w == 0:
                lookup_table[i][w] = 0
            elif weight_list[i - 1] <= w:
                # Item i fits: take the better of including or excluding it
                lookup_table[i][w] = max(
                    util_list[i - 1] + lookup_table[i - 1][w - weight_list[i - 1]],
                    lookup_table[i - 1][w])
            else:
                # Item i does not fit: inherit the value without it
                lookup_table[i][w] = lookup_table[i - 1][w]

    return lookup_table[n][weight_threshold]

Note that here $i$ stands for the number of items available to consider from set $I$. For example, $i = 2$ means we can only select from items 1 and 2. $w$ iterates through all possible weight thresholds, and $K[i][w]$ (lookup_table in the code) represents the optimal utility score of the knapsack problem with weight threshold $w$ and items $1, 2, \ldots, i$.

Let’s see the DP algorithm in action with the following problem with weight threshold 5:

Items     1     2     3
Utility   6     10    10
Weight    1     2     3
Ratio     6     5     3.3

The table $K$ would look like the table below when the algorithm terminates with the optimal solution:

       w=0   w=1   w=2   w=3   w=4   w=5
i=0    0     0     0     0     0     0
i=1    0     6     6     6     6     6
i=2    0     6     10    16    16    16
i=3    0     6     10    16    16    20

where the optimal solution can be found at $K[3][5] = 20$, with items 2 and 3 selected.
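As a quick sanity check, calling knapSack on this instance reproduces the bottom-right entry of the table:

print(knapSack(5, [1, 2, 3], [6, 10, 10]))   # prints 20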

Integer Programming

Another approach is integer programming. Integer programming is a mathematical optimization framework in which some or all of the variables are restricted to be integers. Integer programming is NP-complete, so it is not surprising that the knapsack problem, which can be posed as an integer program, is NP-hard as well. When using the integer programming approach, one usually models the decisions as discrete decision variables, with feasible decisions described by a set of constraints. The resulting model can then be handed to a specialized integer programming algorithm to obtain an optimal solution. In this case, the discrete decision is whether an item should be selected. We introduce a variable $x_i$ for each $i\in I$ to represent whether item $i$ is selected: if $x_i = 1$, item $i$ is selected; otherwise $x_i = 0$ and item $i$ is not selected.

The integer programming model can be formulated as the following:
\begin{equation}
\begin{split}
\max \quad & \sum\limits_{i\in I} u_ix_i, \\
\text{such that} \quad & \sum_{i\in I} w_ix_i \leq W, \\
& x_i \in \{0, 1\} \quad \forall i \in I.
\end{split}
\end{equation}

The term $$\max \sum\limits_{i\in I} u_ix_i$$ is the objective function we want to maximize. In this case, the objective is to maximize the total utility score of the selected subset of items.

The inequality $$\sum_{i\in I} w_ix_i \leq W$$ is the knapsack constraint, which enforces that the selected items’ total weight does not exceed $W$.

This is the classic knapsack problem modeled as an integer program. To solve the model, one can use any integer programming solver. A good option is Google ORtools, an open-source toolkit for writing and solving optimization models.

Below is the Python code that uses ORtools and the CBC integer programming solver to model and solve the knapsack problem:

from ortools.linear_solver import pywraplp

def mip(weight_threshold, weight_list, util_list):
    """
    An integer programming model that solves the 0-1 knapsack problem.

    Args:
        weight_threshold: Weight threshold W
        weight_list: List of weights for each item in item set I
        util_list: Utility score of each item i

    Returns:
        The maximum total utility of a subset of items with total weight
        at most weight_threshold
    """
    n = len(weight_list)

    # Initialize the integer programming model with the open-source CBC solver
    solver = pywraplp.Solver('simple_mip_program',
                             pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)

    # Declare a binary variable x_i for each item
    x_dict = {}
    for i in range(n):
        x_dict[i] = solver.IntVar(0, 1, f'x_{i}')

    # The total weight of selected items cannot exceed the weight threshold
    solver.Add(solver.Sum([weight_list[i] * x_dict[i] for i in range(n)]) <= weight_threshold)

    # Maximize total utility score
    solver.Maximize(solver.Sum([util_list[i] * x_dict[i] for i in range(n)]))

    # Solve!
    status = solver.Solve()

    # Uncomment the section below to print solution details
    # if status == pywraplp.Solver.OPTIMAL:
    #     print('Solution:')
    #     print('Objective value =', solver.Objective().Value())
    #     print('Problem solved in %f milliseconds' % solver.wall_time())
    #     for i in x_dict:
    #         print(f'{x_dict[i]} = {x_dict[i].solution_value()}')

    return solver.Objective().Value()
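On the same example instance we used for dynamic programming, the model recovers the DP answer (assuming the ortools package is installed, e.g. via pip install ortools):

print(mip(5, [1, 2, 3], [6, 10, 10]))   # prints 20.0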

The standard method to solve an integer program is called branch-and-bound. This is a divide-and-conquer approach that partitions the solution space repeatedly until a solution is found and proven optimal.

As the name suggests, branch-and-bound consists of two main actions:

  • Bound: Given a solution set, compute an upper/lower bound estimate of the best solution that can be found within it. For example, one can find an upper bound for a 0-1 knapsack problem by solving its corresponding fractional knapsack problem. Since the fractional knapsack problem allows selecting a fraction of an item while the 0-1 knapsack problem does not, the fractional problem always yields an equal or better objective value, which serves as an upper bound on the objective of the 0-1 knapsack problem.
  • Branch: While computing bounds, we may encounter solutions that satisfy all linear constraints of the problem but are still infeasible because some variable values are fractional. In that case we can branch on one of the fractional variables, splitting the current solution space into two: (in the binary variable case) one that forces the fractional variable to 0, and one that forces it to 1. For example, solving the fractional knapsack problem may yield a solution that takes 50% of item 2. One can then branch on item 2’s variable, splitting the solution space into the part that includes item 2 and the part that excludes it. (A toy implementation of both actions is sketched right after this list.)
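Neither ORtools nor CBC exposes its internals this simply; the following is a toy depth-first branch-and-bound of my own for the 0-1 knapsack, using the fractional (greedy) relaxation as the bound, just to illustrate the interplay of the two actions:

def fractional_bound(cap, items, j):
    """Upper bound: greedily fill remaining capacity cap with items j, j+1, ...
    (pre-sorted by ratio), allowing a fractional final item."""
    bound = 0.0
    for u, w in items[j:]:
        if w <= cap:
            cap -= w
            bound += u
        else:
            bound += u * cap / w  # fractional part of the last item
            break
    return bound

def branch_and_bound(weight_threshold, items):
    """items: list of (utility, weight) pairs. Returns the optimal utility."""
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    best = 0  # incumbent: best feasible (integral) objective found so far

    def recurse(j, cap, value):
        nonlocal best
        best = max(best, value)  # every node visited is a feasible 0-1 solution
        if j == len(items):
            return
        # Bound: prune if even the fractional relaxation cannot beat the incumbent
        if value + fractional_bound(cap, items, j) <= best:
            return
        u, w = items[j]
        if w <= cap:
            recurse(j + 1, cap - w, value + u)  # branch: include item j
        recurse(j + 1, cap, value)              # branch: exclude item j

    recurse(0, weight_threshold, 0)
    return best

# branch_and_bound(5, [(6, 1), (10, 2), (10, 3)])  # returns 20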

Let’s walk through solving the example problem we used with dynamic programming, and hopefully that will clear things up.

We start with the original problem $knapsack(I, W)$, which in branch-and-bound is referred to as the “master problem”. We first solve a relaxation of the master problem. In integer programming, this usually means the linear relaxation: instead of requiring each variable $x_i$ to be binary, we relax that constraint and only require each $x_i$ to lie in $[0, 1]$. The result is a linear program, hence the name “linear relaxation”. Interestingly, in the knapsack case, the linear relaxation is exactly the fractional knapsack problem, so we can solve it with the greedy heuristic and obtain its optimal solution. For more general integer programs, solving the relaxation requires solving a linear program.
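In the ORtools model above, obtaining the linear relaxation amounts to swapping the binary variables for continuous ones and using an LP solver. A minimal sketch (lp_relaxation is my own helper, mirroring mip()):

from ortools.linear_solver import pywraplp

def lp_relaxation(weight_threshold, weight_list, util_list):
    # Same model as mip(), but with GLOP (ORtools' LP solver) and
    # continuous variables in [0, 1] instead of binaries
    solver = pywraplp.Solver('lp_relaxation',
                             pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
    n = len(weight_list)
    x = [solver.NumVar(0, 1, f'x_{i}') for i in range(n)]
    solver.Add(solver.Sum([weight_list[i] * x[i] for i in range(n)]) <= weight_threshold)
    solver.Maximize(solver.Sum([util_list[i] * x[i] for i in range(n)]))
    solver.Solve()
    return solver.Objective().Value(), [v.solution_value() for v in x]

# lp_relaxation(5, [1, 2, 3], [6, 10, 10])  # approx. (22.67, [1.0, 1.0, 0.67])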

After solving the relaxation of $knapsack(I, W)$, we get $(x_1, x_2, x_3) = (1, 1, 0.67)$ and an objective of $22.67$. This solution gives us two pieces of information:

  • A global upper bound of 22.67 on the objective: since the solution space of the 0-1 knapsack problem is a subset of that of the fractional knapsack problem, the best 0-1 objective cannot exceed 22.67. As a matter of fact, it cannot be better than 22, since all coefficients in the 0-1 knapsack problem are integers.
  • The only fractional variable is $x_3$, which is perfect for branching on.

Next, we branch on variable $x_3$ by adding a constraint, and we obtain two subproblems:
a) $knapsack(I, W)$ with constraint $x_3 = 0$
b) $knapsack(I, W)$ with constraint $x_3 = 1$
where each subproblem is a new knapsack problem, and we repeat the same steps and solve each subproblem’s relaxation.

For subproblem a), solving its relaxation yields the solution $(x_1, x_2, x_3) = (1, 1, 0)$ with an objective of $16$. This solution is integral, hence feasible for the 0-1 problem, and can serve as a lower bound: having found a candidate solution with objective $16$, we only need to look for solutions with a better objective, and any branch that cannot beat 16 can be discarded.

For subproblem b), solving its relaxation gives the solution $(x_1, x_2, x_3) = (1, 0.5, 1)$ with an objective of $21$. Since subproblem a) has been fully resolved with a best objective of 16, the global upper bound can be updated from 22 to 21: the only chance of finding a better solution lies in subproblem b)’s solution space, and the best it can achieve is 21. But since $x_2$ is now fractional, we branch further on $x_2$ and form subproblems b1) and b2).

b1) $knapsack(I, W)$ with constraint $x_2 = 0, x_3 = 1$
b2) $knapsack(I, W)$ with constraint $x_2 = 1, x_3 = 1$

Solving subproblem b1), we obtain the solution $(x_1, x_2, x_3) = (1, 0, 1)$ with objective 16. We can safely discard this branch, as the best objective it can attain is 16, which was already found in subproblem a). In this case, since the solution is also integral, we would stop anyway: there are no further solutions to investigate within b1) that could yield a better objective.

Solving subproblem b2), we get $(x_1, x_2, x_3) = (0, 1, 1)$ with objective 20. Since the solution is integral and better than the previous lower bound of 16, it becomes the new lower bound. And as this is the last branch to explore, we can claim it is the optimal solution to the original problem. Another certificate of optimality: the global upper bound is 21, and no combination of items yields a total utility score of exactly 21, so the solution with objective 20 is optimal.

Below is a flowchart that summarizes the steps above:
[Figure: Branch-and-Bound example]

Comparisons

Let’s come back and look at both approaches. Both use some kind of recursive scheme: dynamic programming exploits the problem structure and builds towards the optimal solution from smaller problems recursively, while integer programming recursively partitions the solution space into smaller chunks and uses estimated bounds to discard uninteresting partitions and accelerate the search. Dynamic programming is like a very smart enumeration that avoids unnecessary computation by always building on simpler problems’ optimal solutions. Integer programming does not necessarily work exclusively within the feasible solution set. Rather, it uses the information gained from solving relaxations to refine the lower and upper bounds, and works towards closing the gap between them. The bounds help eliminate parts of the solution space that do not contain better solutions, which is why branch-and-bound can be very efficient.
Dynamic programming is great when the problem structure is nice and the solution set is of moderate size. Integer programming can be very efficient if you have efficient ways to compute quality lower and upper bounds on the solutions.

Which is better for knapsack problem?

In the end, which one should we use for the knapsack problem? To answer this question, we run two sets of experiments:

  1. Run both algorithms on varying sizes of randomly generated knapsack problems to see the impact of size on performance
  2. Run both algorithms on randomly generated knapsack problems with varying tightness of the knapsack constraint to see the effect of constraint tightness on performance. (A sketch of how such instances can be generated follows this list.)
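The exact generation code is not shown here; a minimal sketch of what such a generator might look like (random_knapsack_instance is a hypothetical helper):

import random

def random_knapsack_instance(n, max_coeff=100, seed=None):
    """Generate a random knapsack instance with n items whose weights and
    utilities are integers in [1, max_coeff] (weights start at 1 to avoid
    degenerate zero-weight items)."""
    rng = random.Random(seed)
    weight_list = [rng.randint(1, max_coeff) for _ in range(n)]
    util_list = [rng.randint(1, max_coeff) for _ in range(n)]
    return weight_list, util_list

# Example: one instance from the size experiment
weights, utils = random_knapsack_instance(500, seed=42)
# compare knapSack(100, weights, utils) vs. mip(100, weights, utils)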

The impact of problem size on performance

Problem size is a natural factor to consider, as larger problems usually take longer to solve. We generate knapsack instances with the number of items ranging from 100 to 1000. The weights and utility scores are randomly generated integers between 0 and 100, and the weight threshold is fixed at 100. We run both algorithms on each instance 7 times and record the average solution time in seconds. The results are summarized in the following plot:
[Figure: Comparison of both approaches with varying problem sizes]
From the above plot, it can be observed that for small to moderate problem sizes, the dynamic programming approach is very competitive with the integer programming approach. As the problem size increases, the solution time of both algorithms increases. With this experimental setup, there seems to be no clear advantage of one algorithm over the other.

The impact of knapsack constraint tightness on performance

Another factor we look at is the tightness of the knapsack constraint. A tight constraint means most items will not be selected, while a loose constraint means most items will end up being selected. This directly affects the lookup table size for dynamic programming. We generate random instances of knapsack problems with 100 items, with random weights and utility scores in the range 0 to 100. We run each algorithm 7 times for each weight threshold, ranging from 50 to 950, and record the average solution time in seconds. The results can be seen in the plot below:

[Figure: Comparison of both approaches with varying knapsack constraint tightness]

From the plot one can see that, as the weight threshold increases, the solution time of the dynamic programming approach increases, while integer programming does not seem to be affected as much. This is because dynamic programming’s table size and number of iterations are directly proportional to the weight threshold, while integer programming is affected more directly by the number of integer variables and constraints, so in this case the constraint tightness does not have a huge impact on its solution time.

Conclusion

From the experiments we can see that, for tightly constrained knapsack problems, dynamic programming is a solid choice: its efficiency is competitive with integer programming, and it does not require setting up a model or calling an external solver. Plus, dynamic programming has the bonus of the lookup table, which contains the optimal solutions of knapsack problems with different parameters.

On the other hand, the integer programming approach is better if the problem size is large and the knapsack constraint is not very tight, as it is not affected as much when the tightness of the knapsack constraint changes. Dynamic programming also has the drawback of not being scale invariant: if we multiply the weights and the weight threshold by the same factor, the solution time increases as well, since the lookup table size is weight threshold × number of items. As a side note, integer programming solvers have more weapons in their arsenal for finding feasible solutions faster and refining the bounds more efficiently; for example, many solvers have built-in heuristics that search for feasible solutions, and built-in cuts that tighten the bounds.
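To make the scale-invariance point concrete: the two calls below describe the same underlying problem, but the second forces knapSack to build a table 100 times wider (the specific numbers are just an illustration):

# Same optimal subset (items 2 and 3, total utility 220), very different DP work:
print(knapSack(50, [10, 20, 30], [60, 100, 120]))          # 4 x 51 lookup table
print(knapSack(5000, [1000, 2000, 3000], [60, 100, 120]))  # 4 x 5001 lookup table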

Hopefully this answers the question of when to use which approach. If you are familiar with the column generation approach in optimization, you will know that the knapsack problem often appears as a subproblem that needs to be solved efficiently. In the literature, that knapsack problem is always solved with dynamic programming, which had always made me wonder why. Given the analysis above, I think the choice comes down to the fact that the subproblem is usually small in size and tightly constrained. Hopefully this attracts more people who are interested in optimization, and provides good insight into how each algorithm works to find the optimal solution.

For readers who are interested in the knapsack problem, Google ORtools has more discussion here, and there is also a great book on the same topic.