DP: Knapsack and Chain Multiply

A PDF of Eric’s handwritten notes is here.

More Dynamic Programming: Knapsack (with and without repetition) and chain matrix multiplication

Knapsack

Input: n objects with integer weights w_1, \ldots, w_n and integer values v_1, \ldots, v_n, and a total capacity B.

Output: A subset S of objects with \sum_{i \in S} w_i \le B that maximizes \sum_{i \in S} v_i.

Version 1: each item can be included at most once.

Attempt 1: Look at the prefix of input

For 0\leq i\leq n, let T(i) be the max value attainable using a subset of objects 1,\dots,i. How do we write a recurrence for T(i)?

To get an optimal solution for T(i), we might add item i to the optimal solution for objects 1,\dots,i-1, but we might instead want to add item i to a suboptimal but lighter solution for 1,\dots,i-1, one that leaves enough capacity for item i. So we also need the optimal solution for every possible capacity.

Attempt 2: Take prefix of objects & “prefix” of capacity.

For all 0\leq i\leq n and all 0\leq b\leq B, let
T(i,b) = max value attainable from a subset of objects 1, \ldots, i with total weight \leq b.

Recursive step: We’ll decide whether to include item i or not.  There are two cases, depending on whether or not item i fits within the current capacity b.

If w_i \le b, then:
T(i,b) = \max\left\{v_i + T(i-1,b-w_i), T(i-1,b)\right\}.

Else if w_i > b, then:
T(i,b) = T(i-1,b).

Output: Return T(n,B).

Pseudocode:

KNAPSACK(B, w_1, \ldots, w_n, v_1, \ldots, v_n)

  • for b = 0\rightarrow B
    • T(0,b) = 0
  • for i=1\rightarrow n
    • for b=0\rightarrow B
      • if w_i \le b, then:
        T(i,b) = \max\{v_i + T(i-1,b-w_i),T(i-1,b)\}
      • else if w_i > b, then:
        T(i,b)=T(i-1,b)
  • return T(n,B)

Running time: The running time is O(nB) since there are O(nB) subproblems and each is solved in O(1) time from previously computed entries.
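For concreteness, here is a short Python sketch of the same table-filling procedure; the function name knapsack and the list-of-(weight, value)-pairs input format are our own choices for illustration, not part of the notes.

  def knapsack(B, items):
      # items is a list of (weight, value) pairs with positive integer weights.
      # Returns the max total value achievable with total weight <= B,
      # using each item at most once.
      n = len(items)
      # T[i][b] = max value using a subset of the first i items and capacity b
      T = [[0] * (B + 1) for _ in range(n + 1)]
      for i in range(1, n + 1):
          w_i, v_i = items[i - 1]
          for b in range(1, B + 1):
              if w_i <= b:
                  T[i][b] = max(v_i + T[i - 1][b - w_i], T[i - 1][b])
              else:
                  T[i][b] = T[i - 1][b]
      return T[n][B]

  # Example: capacity 10 with items (weight, value):
  # knapsack(10, [(6, 30), (3, 14), (4, 16), (2, 9)]) returns 46
  # (take the items of weight 6 and weight 4).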

Version 2: unlimited supply of each item

Now we don’t need to keep track of which objects have already been used.

Let T(b) = max value attainable using capacity \le b.

Recursive step:
We consider all possibilities for the last item i to add in. If item i fits, then we gain value v_i and we take the optimal solution for the remaining capacity b-w_i. Thus, for 0\leq b\leq B let:
T(b) = \max_{1 \le i \le n} \left\{v_i + T(b-w_i) : w_i\leq b \right\},
where we take T(b)=0 if no item fits (the max over an empty set).

Output: T(B)

Pseudocode:

KNAPSACK-repeat(B, w_1, \ldots, w_n, v_1, \ldots, v_n)

  • for b=0\rightarrow B
    • T(b) = 0
    • for i=1\rightarrow n
      • if w_i \le b, then:
        T(b) = \max\{T(b),v_i + T(b-w_i)\}
  • return T(B)

Running time: The running time is again O(nB) since there are O(B) subproblems and each considers adding each of the n items.
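A Python sketch of this version, under the same conventions as above (the function name and input format are ours):

  def knapsack_repeat(B, items):
      # items is a list of (weight, value) pairs with positive integer weights.
      # Returns the max total value with total weight <= B, allowing unlimited
      # copies of each item.
      T = [0] * (B + 1)  # T[b] = max value attainable using capacity <= b
      for b in range(1, B + 1):
          for w_i, v_i in items:
              if w_i <= b:
                  T[b] = max(T[b], v_i + T[b - w_i])
      return T[B]

  # Example: the same items as before now give a larger value, since repeats are allowed:
  # knapsack_repeat(10, [(6, 30), (3, 14), (4, 16), (2, 9)]) returns 48
  # (the item of weight 6 plus two copies of the item of weight 2).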

Side note:  We will later see that Knapsack is NP-complete.  We’ve just shown an O(nB) time algorithm for Knapsack.  Have we shown a polynomial-time algorithm for an NP-complete problem?  That would imply P=NP, so what’s the catch, given that nB is a polynomial in n and B?  The catch is that nB is not polynomial in the input size.  The total capacity B is a single number; how big is the input for this parameter?  It takes only O(\log B) bits to express B.  Thus, to be polynomial in the input size, the running time of our algorithm should be polynomial in n and \log B.  So our current algorithm is actually exponential in the input size.

Chain matrix multiply

Example: Given 4 matrices A,B,C,D, want to compute A \times B \times C \times D most efficiently.

Say the size of A is 50 \times 20, B is 20 \times 1, C is 1 \times 10, and D is 10 \times 100. There are various ways to compute A \times B \times C \times D. The standard approach is to do (((A \times B) \times C) \times D), but we could also do ((A \times B) \times (C \times D)) or (A \times (B \times (C \times D))). There are many possibilities; which way is best?

For a matrix Y of size a\times b and Z of size b\times c, consider W=Y\times Z which is of size a\times c. Computing an entry of W involves b multiplications and b-1 additions. Since there are ac entries of W, in total there are abc multiplications and a(b-1)c additions. To simplify (and since multiplications take longer than additions) let us define the cost of computing Y\times Z as abc.

Now we can return to our earlier example and compute the cost for a few different ways of computing A \times B \times C \times D:
((A \times B) \times C) \times D has cost (50)(20)(1)+(50)(1)(10)+(50)(10)(100)=51500

(A \times B) \times (C \times D) has cost (50)(20)(1)+(1)(10)(100)+(50)(1)(100)=7000

A \times ((B \times C) \times D) has cost (20)(1)(10)+(20)(10)(100)+(50)(20)(100)=120200
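As a sanity check on these numbers, here is a small Python sketch that computes the cost of any fully parenthesized product from the matrix dimensions; the function multiply_cost and the nested-tuple encoding of a parenthesization are ours, introduced only for illustration.

  def multiply_cost(expr, dims):
      # expr is either a matrix index (a leaf) or a pair (left, right);
      # dims[i] = (rows, cols) of matrix i.  Returns (rows, cols, cost).
      if isinstance(expr, int):
          r, c = dims[expr]
          return r, c, 0
      left, right = expr
      a, b1, cost_left = multiply_cost(left, dims)
      b2, c, cost_right = multiply_cost(right, dims)
      assert b1 == b2, "inner dimensions must match"
      return a, c, cost_left + cost_right + a * b1 * c

  # A, B, C, D from the example above:
  dims = [(50, 20), (20, 1), (1, 10), (10, 100)]
  print(multiply_cost((((0, 1), 2), 3), dims)[2])  # ((A x B) x C) x D -> 51500
  print(multiply_cost(((0, 1), (2, 3)), dims)[2])  # (A x B) x (C x D) -> 7000
  print(multiply_cost((0, ((1, 2), 3)), dims)[2])  # A x ((B x C) x D) -> 120200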

General problem:

Consider matrices A_1, A_2, \dots, A_n where A_i has size m_{i-1} \times m_i. What is the minimum cost of computing A_1\times A_2\times \dots \times A_n?

Input: m_0, m_1, \dots, m_n

Output: the minimum cost for computing A_1\times A_2\times \dots \times A_n.

First attempt – Use prefixes:

Let C(i) = min cost for computing A_1 \times \dots \times A_i

A_1 \times \dots \times A_i splits into A_1\times \dots \times A_j and A_{j+1} \times \dots \times A_i for some j. The min cost for computing A_1\times \dots \times A_j is C(j), but A_{j+1} \times \dots \times A_i does not correspond to a prefix, so we can’t look up its solution in our table. Thus prefixes don’t work, but we have an idea for what to try next: substrings.

Second attempt – Use substrings:

Subproblem definition:
Let C(i,j) = min cost for multiplying A_i \times \dots \times A_j

Recursive solution:
We try all possibilities for the last split. Let k denote the position of the last split so that we are multiplying A_i\times \cdots\times A_k and A_{k+1}\times \cdots\times A_j where i\leq k < j. Thus we have the following recurrence:
C(i,j) = \min_{i \le k \le j-1} \left\{C(i,k) + C(k+1,j) + m_{i-1} m_k m_j\right\}

Filling the table:
We have to be more careful now about how we fill the table. The base cases are the subproblems with i=j, i.e., the entries C(i,i)=0, so we begin by filling the diagonal of the matrix C. We then do the case j=i+1, the entries just above the diagonal. In general, let s=j-i, which we can view as the width of the subproblem. To solve a subproblem of width s we only need subproblems of width \leq s-1, so we fill the table C in order of increasing width, s=0\rightarrow n-1. For s=n-1 we have i=1, j=n, which is our final answer C(1,n). We can now state the pseudocode for our algorithm.

ChainMultiply(m_0, m_1, \dots, m_n)

  • for i=1 to n: C(i,i) = 0
  • for s=1 to n-1:
    • for i=1 to n-s:
      • Let j = i+s
      • Let C(i,j) = \infty
      • for \ell=i to j-1:
        • Let curr = C(i,\ell) + C(\ell+1,j) + m_{i-1} m_\ell m_j
        • if curr < C(i,j) then let C(i,j) = curr
  • return C(1,n)

Running time: The algorithm has 3 for loops, each of size O(n), and hence the overall running time is O(n^3).
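Here is a Python sketch of ChainMultiply that keeps the 1-indexed table C(i,j) from the pseudocode (the function name and the list input are ours):

  def chain_multiply(m):
      # m = [m_0, m_1, ..., m_n], where matrix A_i has size m[i-1] x m[i].
      # Returns the minimum total cost of computing A_1 x A_2 x ... x A_n.
      n = len(m) - 1
      # C[i][j] = min cost of A_i x ... x A_j; the entries C[i][i] start at 0.
      C = [[0] * (n + 1) for _ in range(n + 1)]
      for s in range(1, n):               # width of the subproblem
          for i in range(1, n - s + 1):
              j = i + s
              C[i][j] = float("inf")
              for k in range(i, j):       # position of the last split
                  curr = C[i][k] + C[k + 1][j] + m[i - 1] * m[k] * m[j]
                  if curr < C[i][j]:
                      C[i][j] = curr
      return C[1][n]

  # Dimensions for the A, B, C, D example (50x20, 20x1, 1x10, 10x100):
  # chain_multiply([50, 20, 1, 10, 100]) returns 7000.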