16.2 Elements of the greedy strategy
16.2-1
Prove that the fractional knapsack problem has the greedy-choice property.
Let $I$ be the following instance of the fractional knapsack problem: let $n$ be the number of items, let $v_i$ be the value and $w_i$ the weight of the $i$th item, and let $W$ be the capacity. Assume the items have been ordered in increasing order of $v_i / w_i$, so that item $n$ has the greatest value per unit of weight.
Let $s = (s_1, s_2, \ldots, s_n)$ be a solution, where $s_i$ is the weight of item $i$ that we take. The greedy algorithm works by assigning $s_n = \min(w_n, W)$, and then continuing by solving the subproblem
$$I' = (n - 1, \{v_1, v_2, \ldots, v_{n - 1}\}, \{w_1, w_2, \ldots, w_{n - 1}\}, W - s_n)$$
until it reaches the state $W = 0$ or $n = 0$.
We need to show that this strategy always gives an optimal solution. We prove this by contradiction. Suppose an optimal solution $s_1, s_2, \ldots, s_n$ has $s_n < \min(w_n, W)$, and among optimal solutions choose one with the largest $s_n$. Let $i$ be the smallest index with $i < n$ and $s_i > 0$; such an $i$ exists, since otherwise the knapsack would have spare capacity and increasing $s_n$ would give a better solution. Decreasing $s_i$ by $\delta = \min(s_i, \min(w_n, W) - s_n)$ and increasing $s_n$ by the same amount changes the total value by $\delta(v_n / w_n - v_i / w_i) \ge 0$, since item $n$ has the greatest value per unit of weight. This yields an optimal solution with a larger $s_n$, a contradiction, so the assumption must be false. Hence the problem has the greedy-choice property.
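As a concrete illustration, here is a minimal Python sketch of this greedy strategy (the `(value, weight)` pair representation and the function name are our own choices, not part of the exercise):

```python
def fractional_knapsack_greedy(items, W):
    """Fractional knapsack: items is a list of (value, weight) pairs.

    Takes items in decreasing order of value per unit weight, splitting
    the last item if it does not fit, and returns the total value."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if W <= 0:
            break
        take = min(weight, W)              # whole item if it fits, else a fraction
        total += value * take / weight
        W -= take
    return total

# Example: capacity 50 with items of densities 6, 5 and 4.
print(fractional_knapsack_greedy([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```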
16.2-2
Give a dynamic-programming solution to the $0$-$1$ knapsack problem that runs in $O(nW)$ time, where $n$ is the number of items and $W$ is the maximum weight of items that the thief can put in his knapsack.
Suppose we know that a particular item of weight $w$ is in the solution. Then we must solve the subproblem on $n - 1$ items with maximum weight $W - w$. Thus, to take a bottom-up approach we must solve the $0$-$1$ knapsack problem for all items and all possible weights smaller than $W$. We'll build an $(n + 1)$ by $(W + 1)$ table of values where the rows are indexed by item and the columns are indexed by total weight. (The first row and the first column of the table are dummies, corresponding to zero items and zero capacity.)
For row $i$, column $j$, we decide whether or not it would be advantageous to include item $i$ in the knapsack by comparing the total value of a knapsack containing items $1$ through $i - 1$ with maximum weight $j$ against the total value of a knapsack containing items $1$ through $i - 1$ with maximum weight $j - i.weight$, plus item $i$ itself. To solve the problem, we simply examine the $(n, W)$ entry of the table to determine the maximum value we can achieve. To read off the items we include, start with entry $(n, W)$ and proceed as follows: if entry $(i, j)$ equals entry $(i - 1, j)$, don't include item $i$, and examine entry $(i - 1, j)$ next; otherwise, include item $i$ and examine entry $(i - 1, j - i.weight)$ next. The algorithm below constructs the table:
0-1-KNAPSACK(n, W)
    initialize an (n + 1) by (W + 1) table K
    for i = 0 to n
        K[i, 0] = 0
    for j = 1 to W
        K[0, j] = 0
    for i = 1 to n
        for j = 1 to W
            if j < i.weight
                K[i, j] = K[i - 1, j]
            else
                K[i, j] = max(K[i - 1, j], K[i - 1, j - i.weight] + i.value)
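For concreteness, here is a Python version of the same table construction, extended with the traceback described above (the function name and the `(value, weight)` item representation are our own):

```python
def knapsack_01(items, W):
    """0-1 knapsack in O(nW) time; items is a list of (value, weight) pairs.

    Returns the maximum achievable value together with the chosen item
    indices (1-based, matching the table's row indexing)."""
    n = len(items)
    # K[i][j] = best value using items 1..i with capacity j.
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        value, weight = items[i - 1]
        for j in range(1, W + 1):
            if j < weight:
                K[i][j] = K[i - 1][j]      # item i cannot fit in capacity j
            else:
                K[i][j] = max(K[i - 1][j], K[i - 1][j - weight] + value)
    # Trace back through the table to recover the chosen items.
    chosen, j = [], W
    for i in range(n, 0, -1):
        if K[i][j] != K[i - 1][j]:         # item i was included
            chosen.append(i)
            j -= items[i - 1][1]
    return K[n][W], chosen[::-1]

# Example: the best value 7 is achieved by items 2 and 3.
print(knapsack_01([(6, 4), (3, 2), (4, 3)], 5))  # (7, [2, 3])
```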
16.2-3
Suppose that in a $0$-$1$ knapsack problem, the order of the items when sorted by increasing weight is the same as their order when sorted by decreasing value. Give an efficient algorithm to find an optimal solution to this variant of the knapsack problem, and argue that your algorithm is correct.
Sort the items by decreasing value (equivalently, by increasing weight) and add them to the knapsack in that order until the next item no longer fits; since every later item is at least as heavy, none of the remaining items fits either. To see that this greedy strategy is correct, suppose an optimal solution takes an item with value $v_1$ and weight $w_1$ but drops an item with value $v_2$ and weight $w_2$, where $w_1 \ge w_2$ and $v_1 \le v_2$. Then we can substitute item $2$ for item $1$ and get a solution that is no heavier and at least as valuable. Therefore we should always choose the items with the greatest values first. Sorting dominates the running time, which is $O(n \lg n)$.
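A short Python sketch of this greedy follows; the `(value, weight)` representation is our assumption:

```python
def knapsack_sorted_variant(items, W):
    """0-1 knapsack where sorting by increasing weight also sorts by
    decreasing value: greedily take the lightest items that still fit."""
    total = 0
    for value, weight in sorted(items, key=lambda it: it[1]):
        if weight > W:
            break                  # every remaining item is at least this heavy
        total += value
        W -= weight
    return total
```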
16.2-4
Professor Gekko has always dreamed of inline skating across North Dakota. He plans to cross the state on highway U.S. 2, which runs from Grand Forks, on the eastern border with Minnesota, to Williston, near the western border with Montana. The professor can carry two liters of water, and he can skate $m$ miles before running out of water. (Because North Dakota is relatively flat, the professor does not have to worry about drinking water at a greater rate on uphill sections than on flat or downhill sections.) The professor will start in Grand Forks with two full liters of water. His official North Dakota state map shows all the places along U.S. 2 at which he can refill his water and the distances between these locations.
The professor's goal is to minimize the number of water stops along his route across the state. Give an efficient method by which he can determine which water stops he should make. Prove that your strategy yields an optimal solution, and give its running time.
The greedy solution solves this problem optimally: at each point we maximize the distance we cover before refilling, so the first stop is at the furthest water stop from the starting position that is less than or equal to $m$ miles away. The problem exhibits optimal substructure, since once we have chosen a first stopping point $p$, we solve the subproblem assuming we are starting at $p$; combining these two plans yields an optimal solution for the usual cut-and-paste reasons. Now we must show that this greedy approach in fact yields a first stopping point which is contained in some optimal solution. Let $O$ be any optimal solution which has the professor stop at positions $o_1, o_2, \ldots, o_k$, and let $g_1$ denote the furthest stopping point we can reach from the starting point, so that $g_1 \ge o_1$. Then we may replace $o_1$ by $g_1$ to create a modified solution $G$: since $o_2 - g_1 \le o_2 - o_1 \le m$, we can still make it to all the positions in $G$ without running out of water. Since $G$ has the same number of stops, we conclude that $g_1$ is contained in some optimal solution. Therefore the greedy strategy works. Because the algorithm makes a single pass over the $n$ water stops listed on the map, it runs in $O(n)$ time.
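A Python sketch of this single pass is below; the input representation (sorted mile markers of the stops plus the total route length) is our own assumption:

```python
def water_stops(positions, total_distance, m):
    """Greedy water-stop selection for exercise 16.2-4.

    positions: sorted mile markers of the refill stops; total_distance:
    length of the route; m: miles skated on two liters of water.
    Returns the chosen stops, assuming no gap on the route exceeds m."""
    points = positions + [total_distance]  # treat the destination as the goal
    stops = []
    reach = m                              # furthest point reachable right now
    i = 0
    while reach < total_distance:
        # Skate to the furthest stop within reach, then refill there.
        while i + 1 < len(points) and points[i + 1] <= reach:
            i += 1
        if points[i] + m <= reach:
            raise ValueError("route is infeasible: some gap exceeds m miles")
        stops.append(points[i])
        reach = points[i] + m
    return stops

# Example: with m = 5, stopping at miles 5 and 9 crosses a 12-mile route.
print(water_stops([3, 5, 9], 12, 5))  # [5, 9]
```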
16.2-5
Describe an efficient algorithm that, given a set $\{x_1, x_2, \ldots, x_n\}$ of points on the real line, determines the smallest set of unit-length closed intervals that contains all of the given points. Argue that your algorithm is correct.
Consider the leftmost interval. It does no good for it to extend any further left than the leftmost point; however, it must contain the leftmost point. So its left endpoint is exactly the leftmost point.
Having placed that interval, we remove every point within unit distance to the right of the leftmost point, since all of them are contained in this single interval, and then repeat on the points that remain until every point is covered. Since at each step there is a clearly optimal choice for where to put the leftmost interval, this final solution is optimal. After sorting the points in $O(n \lg n)$ time, the rest is a single linear scan.
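A Python sketch of this procedure, assuming the points are given as a list of reals:

```python
def unit_interval_cover(points):
    """Smallest set of unit-length closed intervals covering all points.

    Returns the left endpoints; each interval is [l, l + 1]."""
    left_endpoints = []
    for x in sorted(points):
        # Open a new interval only when x is not covered by the last one.
        if not left_endpoints or x > left_endpoints[-1] + 1:
            left_endpoints.append(x)
    return left_endpoints

# Example: three points are covered by two unit intervals.
print(unit_interval_cover([0.5, 1.2, 2.9]))  # [0.5, 2.9]
```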
16.2-6 $\star$
Show how to solve the fractional knapsack problem in $O(n)$ time.
First compute the value density of each item, defined to be its value divided by its weight. We use a recursive approach as follows: find the item of median density, which can be done in linear time as shown in Chapter 9. Then sum the weights of all items whose density exceeds the median and call this sum $M$. If $M$ exceeds $W$, then the solution lies entirely among the items denser than the median; in other words, we are now solving the fractional knapsack problem on an input of size $n / 2$. On the other hand, if $M$ doesn't exceed $W$, then we take all of the high-density items in full and must solve the fractional knapsack problem on the $n / 2$ low-density items with maximum weight $W - M$. Let $T(n)$ denote the running time of the algorithm. Since we can solve the problem in constant time when there is only one item, the recurrence is $T(n) = T(n / 2) + cn$ with $T(1) = d$, which gives a running time of $O(n)$.
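The following Python sketch traces this recursion. Note that it finds the median density by sorting, which costs $O(n \lg n)$; substituting the linear-time SELECT of Chapter 9 for that step gives the claimed $O(n)$ bound:

```python
def fractional_knapsack_dc(items, W):
    """Divide-and-conquer fractional knapsack; items: (value, weight) pairs."""
    if not items or W <= 0:
        return 0.0
    if len(items) == 1:
        value, weight = items[0]
        return value * min(weight, W) / weight
    densities = sorted(v / w for v, w in items)
    med = densities[len(items) // 2]       # stand-in for linear-time SELECT
    high = [it for it in items if it[0] / it[1] > med]
    mids = [it for it in items if it[0] / it[1] == med]
    low = [it for it in items if it[0] / it[1] < med]
    M = sum(w for _, w in high)
    if M > W:
        # The answer uses only items denser than the median.
        return fractional_knapsack_dc(high, W)
    # Take every high-density item in full, then median-density items.
    total, cap = float(sum(v for v, _ in high)), W - M
    for value, weight in mids:
        take = min(weight, cap)
        total += value * take / weight
        cap -= take
    if cap > 0 and low:
        total += fractional_knapsack_dc(low, cap)
    return total

# Same instance as above: the divide-and-conquer scheme agrees with the greedy.
print(fractional_knapsack_dc([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```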
16.2-7
Suppose you are given two sets $A$ and $B$, each containing $n$ positive integers. You can choose to reorder each set however you like. After reordering, let $a_i$ be the $i$th element of set $A$, and let $b_i$ be the $i$th element of set $B$. You then receive a payoff of $\prod_{i = 1}^n a_i^{b_i}$. Give an algorithm that will maximize your payoff. Prove that your algorithm maximizes the payoff, and state its running time.
Since applying the same permutation to both sets doesn't affect the product, suppose that $A$ is sorted in ascending order. Then we will prove that the payoff is maximized when $B$ is also sorted in ascending order. To see this, suppose not; that is, suppose there are indices $i < j$ such that $a_i < a_j$ and $b_i > b_j$. Consider the contribution to the product from the indices $i$ and $j$ alone, namely $a_i^{b_i} a_j^{b_j}$. If we were to swap the order of $b_i$ and $b_j$, that contribution would become $a_i^{b_j} a_j^{b_i}$. This is larger than the previous expression because it differs from it by a factor of $\left(\frac{a_j}{a_i}\right)^{b_i - b_j}$, which is greater than one. So we couldn't have maximized the payoff with this ordering on $B$. The algorithm therefore just sorts both sets in ascending order and pairs them up, and its running time is dominated by the $O(n \lg n)$ sorting.
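A minimal Python sketch of the resulting algorithm (the function name is ours):

```python
from math import prod

def max_payoff(A, B):
    """Sort both sets ascending and pair them up to maximize prod(a**b)."""
    return prod(a ** b for a, b in zip(sorted(A), sorted(B)))

# Example: pairing 2 with 3 and 3 with 4 gives 2**3 * 3**4 = 648,
# beating the alternative pairing's 2**4 * 3**3 = 432.
print(max_payoff([3, 2], [4, 3]))  # 648
```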