Duality Theory

The Advanced Simplex Pivot Tool is useful for the first exercise.

General Duality Theory

While any LP can be dualized after being transformed to standard form, the meaning of the dual variables will be obscured in the process. In order to compute the dual of an LP with minimal conversion, [Bur] proposes a more general notion of standard form:

\[\begin{split}\begin{aligned} \mathcal{P} \quad \text{maximize} \quad \sum_{j = 1}^n c_j x_j &\\ \text{subject to} \quad \sum_{j = 1}^n a_{ij} x_j &\leq b_i \quad i \in I\\ \sum_{j = 1}^n a_{ij} x_j &= b_i \quad i \in E\\ 0 &\leq x_j \quad j \in R \end{aligned} \quad \iff \quad \begin{aligned} \mathcal{D} \quad \text{minimize} \quad \sum_{i = 1}^m b_i y_i &\\ \text{subject to} \quad \sum_{i = 1}^m a_{ij} y_i &\geq c_j \quad j \in R\\ \sum_{i = 1}^m a_{ij} y_i &= c_j \quad j \in F\\ 0 &\leq y_i \quad i \in I \end{aligned}\end{split}\]

where \(I \cap E = \emptyset\), \(I \cup E = \{1, 2, \ldots, m\}\), \(R \subset \{1, 2, \ldots, n\}\), and \(F = \{1, 2, \ldots, n\} \setminus R\). This formulation can be combined with the rules in Table 5.1 to derive the dual of any LP.
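As a concrete illustration, the following sketch mechanically applies this rule: given the primal data together with the index sets \(I\), \(E\), and \(R\), it assembles the data of the dual. The function name and data layout are illustrative choices of mine, not taken from [Bur]:

    import numpy as np

    def general_dual(A, b, c, I, E, R):
        """Dualize: maximize c'x subject to A[i]x <= b[i] (i in I),
        A[i]x == b[i] (i in E), x[j] >= 0 (j in R), x[j] free otherwise,
        following the rule above."""
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        assert set(I) | set(E) == set(range(m)) and not (set(I) & set(E))
        return {
            "objective": ("minimize", np.asarray(b, dtype=float)),      # minimize b'y
            "A_T": A.T,                                                  # row j reads sum_i a_ij y_i
            "rhs": np.asarray(c, dtype=float),
            "sense": [">=" if j in R else "==" for j in range(n)],      # >= for restricted x_j, == for free x_j
            "y_sign": [">= 0" if i in I else "free" for i in range(m)],
        }

    # The primal of Exercise 5.1 below (0-based: I = {0, 1}, E = {2}, R = {1, 2});
    # the output matches the dual derived there.
    A = [[-1, -2, 1, -1], [4, 3, 4, -2], [-1, -1, 2, 1]]
    b = [0, 3, 1]
    c = [1, -2, 0, 0]
    print(general_dual(A, b, c, I={0, 1}, E={2}, R={1, 2}))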

Duality Gap and Complementary Slackness

[Wil] presents an alternative proof of the Strong Duality Theorem using Farkas’ Lemma. As illustrated in [Rad], the Weak Duality Theorem asserts that for a primal linear program \(\mathcal{P}\) and its dual \(\mathcal{D}\), there are only the following possibilities:

  • \(\mathcal{P}\) infeasible, \(\mathcal{D}\) infeasible (infinite gap).

  • \(\mathcal{P}\) infeasible, \(\mathcal{D}\) unbounded (no gap).

  • \(\mathcal{P}\) unbounded, \(\mathcal{D}\) infeasible (no gap).

  • \(\mathcal{P}\) feasible, \(\mathcal{D}\) feasible.

The Strong Duality Theorem states that if either \(\mathcal{P}\) or \(\mathcal{D}\) has a finite optimal value, there is no duality gap.

Lastly, given the primal optimal solution, the Complementary Slackness Theorem enables one to derive the dual optimal solution.

The Dual Simplex Method

The book is vague on how to select the entering and leaving variables. See [Ell] for a concise yet clear explanation.
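For reference, here is a minimal sketch of that selection rule, assuming each dictionary row is stored as \(w_i = b_i + \sum_j a_{ij} x_j\) with dual-feasible objective coefficients \(c_j \leq 0\) (the conventions used in the exercises below); the function and variable names are my own:

    import numpy as np

    def dual_simplex_pivot_choice(b, c, A):
        """Pick the leaving row and entering column for one dual simplex step.

        Convention: each basic row reads w_i = b[i] + sum_j A[i, j] * x_j, and
        the objective row has c[j] <= 0 for every nonbasic variable (dual feasibility).
        """
        b, c, A = np.asarray(b, float), np.asarray(c, float), np.asarray(A, float)
        if (b >= 0).all():
            return None                      # already primal feasible: optimal
        i = int(np.argmin(b))                # leaving row: most negative b_i
        pos = A[i] > 0                       # only columns with a positive entry qualify
        if not pos.any():
            raise ValueError("dual unbounded, hence primal infeasible")
        ratios = np.full_like(c, np.inf)
        ratios[pos] = np.abs(c[pos]) / A[i, pos]
        j = int(np.argmin(ratios))           # entering column: smallest |c_j| / a_ij
        return i, j

    # First step of Exercise 5.6: rows are w_1, ..., w_6 and columns are x_1, x_2.
    b = [6, -1, 6, 1, 6, -3]
    c = [-1, -2]
    A = [[2, -7], [3, -1], [-9, 4], [-1, 1], [-7, 3], [5, -2]]
    print(dual_simplex_pivot_choice(b, c, A))   # (5, 0): w_6 leaves and x_1 enters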

Exercise 5.1

Rewriting the LP to standard form yields

\[\begin{split}\begin{aligned} \mathcal{P} \quad \text{maximize} \quad x_1 - 2 x_2 &\\ \text{subject to} \quad -x_1 - 2 x_2 + x_3 - x_4 &\leq 0\\ 4 x_1 + 3 x_2 + 4 x_3 - 2 x_4 &\leq 3\\ -x_1 - x_2 + 2 x_3 + x_4 &= 1\\ 0 &\leq x_j \quad j = 2, 3. \end{aligned}\end{split}\]

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad 3 y_2 + y_3 &\\ \text{subject to} \quad -y_1 + 4 y_2 - y_3 &= 1\\ -2 y_1 + 3 y_2 - y_3 &\geq -2\\ y_1 + 4 y_2 + 2 y_3 &\geq 0\\ -y_1 - 2 y_2 + y_3 &= 0\\ 0 &\leq y_i \quad i = 1, 2 \end{aligned} \quad = \quad \begin{aligned} \mathcal{D} \quad -\text{maximize} \quad -3 y_2 - y_3 &\\ \text{subject to} \quad y_1 - 4 y_2 + y_3 &= -1\\ 2 y_1 - 3 y_2 + y_3 &\leq 2\\ -y_1 - 4 y_2 - 2 y_3 &\leq 0\\ y_1 + 2 y_2 - y_3 &= 0\\ 0 &\leq y_i \quad i = 1, 2. \end{aligned}\end{split}\]

Observe that the dual to \(\mathcal{D}\) is indeed \(\mathcal{P}\):

\[\begin{split}\begin{aligned} \mathcal{P} \quad -\text{minimize} \quad -x_1 + 2 x_2 &\\ \text{subject to} \quad x_1 + 2 x_2 - x_3 + x_4 &\geq 0\\ -4 x_1 - 3 x_2 - 4 x_3 + 2 x_4 &\geq -3\\ x_1 + x_2 - 2 x_3 - x_4 &= -1\\ 0 &\leq x_j \quad j = 2, 3 \end{aligned} \quad = \quad \begin{aligned} \mathcal{P} \quad \text{maximize} \quad x_1 - 2 x_2 &\\ \text{subject to} \quad -x_1 - 2 x_2 + x_3 - x_4 &\leq 0\\ 4 x_1 + 3 x_2 + 4 x_3 - 2 x_4 &\leq 3\\ -x_1 - x_2 + 2 x_3 + x_4 &= 1\\ 0 &\leq x_j \quad j = 2, 3. \end{aligned}\end{split}\]

Exercise 5.2

The optimal primal solution for Exercise 2.9 is

\[\mathbf{x}^* = \left( x_1, x_2, x_3 \right) = \left( \frac{3}{2}, \frac{5}{2}, 0 \right) \quad \land \quad \mathbf{w}^* = \left( w_1, w_2, w_3 \right) = \left( 0, 0, \frac{1}{2} \right).\]

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad 5 y_1 + 4 y_2 + 7 y_3 &\\ \text{subject to} \quad y_2 + y_3 &\geq 2\\ 2 y_1 + y_2 + 2 y_3 &\geq 3\\ 3 y_1 + 2 y_2 + 3 y_3 &\geq 4\\ 0 &\leq y_i \quad i = 1, 2, 3 \end{aligned}\end{split}\]

From the Complementary Slackness Theorem 5.3, the optimal dual solution needs to satisfy (5.7)

\[\begin{split}x_1 z_1 &= x_1 (y_2 + y_3 - 2) = 0\\ x_2 z_2 &= x_2 (2 y_1 + y_2 + 2 y_3 - 3) = 0\\ x_3 z_3 &= x_3 (3 y_1 + 2 y_2 + 3 y_3 - 4) = 0\\ w_1 y_1 &= 0\\ w_2 y_2 &= 0\\ w_3 y_3 &= 0.\end{split}\]

These constraints yield the following linear system

\[\begin{split}\begin{bmatrix} 0 & 3 & 3\\ 10 & 5 & 10\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix} = \begin{bmatrix} 6\\ 15\\ 0\end{bmatrix}.\end{split}\]

Reducing the associated augmented matrix to echelon form gives

\[\mathbf{y} = \left( y_1, y_2, y_3 \right) = \left( \frac{1}{2}, 2, 0 \right).\]

Since \(\mathbf{y}\) is feasible, the Strong Duality Theorem guarantees \(\mathbf{y}^* = \mathbf{y}\).
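As a quick numerical check, the sketch below (not part of the original solution) solves the system above with NumPy and verifies dual feasibility and the zero duality gap:

    import numpy as np

    # Complementary slackness system from above (rows scaled as shown).
    M = np.array([[0., 3., 3.], [10., 5., 10.], [0., 0., 1.]])
    y = np.linalg.solve(M, np.array([6., 15., 0.]))
    print(y)                                                  # y = (1/2, 2, 0)

    # Dual feasibility: the constraint matrix of D applied to y must dominate c = (2, 3, 4).
    A_D = np.array([[0., 1., 1.], [2., 1., 2.], [3., 2., 3.]])
    c = np.array([2., 3., 4.])
    print(np.all(A_D @ y >= c - 1e-9), np.all(y >= -1e-9))    # True True

    # Zero duality gap: b'y equals c'x* = 2(3/2) + 3(5/2) + 4(0) = 10.5.
    b = np.array([5., 4., 7.])
    x_star = np.array([1.5, 2.5, 0.])
    print(np.isclose(b @ y, c @ x_star))                      # True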

Exercise 5.3

The optimal primal solution for Exercise 2.1 is

\[\mathbf{x}^* = \left( x_1, x_2, x_3, x_4 \right) = \left( 2, 0, 1, 0 \right) \quad \land \quad \mathbf{w}^* = \left( w_1, w_2 \right) = \left( 0, 0 \right).\]

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad 5 y_1 + 3 y_2 &\\ \text{subject to} \quad 2 y_1 + y_2 &\geq 6\\ y_1 + 3 y_2 &\geq 8\\ y_1 + y_2 &\geq 5\\ 3 y_1 + 2 y_2 &\geq 9\\ 0 &\leq y_i \quad i = 1, 2 \end{aligned}\end{split}\]

From the Complementary Slackness Theorem 5.3, the optimal dual solution needs to satisfy (5.7)

\[\begin{split}x_1 z_1 &= x_1 (2 y_1 + y_2 - 6) = 0\\ x_2 z_2 &= x_2 (y_1 + 3 y_2 - 8) = 0\\ x_3 z_3 &= x_3 (y_1 + y_2 - 5) = 0\\ x_4 z_4 &= x_4 (3 y_1 + 2 y_2 - 9) = 0\\ w_1 y_1 &= 0\\ w_2 y_2 &= 0.\end{split}\]

These constraints yield the following linear system

\[\begin{split}\begin{bmatrix} 4 & 2\\ 1 & 1 \end{bmatrix} \begin{bmatrix} y_1\\ y_2 \end{bmatrix} = \begin{bmatrix} 12\\ 5 \end{bmatrix}.\end{split}\]

Reducing the associated augmented matrix to echelon form gives

\[\mathbf{y} = \left( y_1, y_2 \right) = \left( 1, 4 \right).\]

Since \(\mathbf{y}\) is feasible, the Strong Duality Theorem guarantees \(\mathbf{y}^* = \mathbf{y}\).
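For an independent check, the dual above can be handed directly to scipy.optimize.linprog; the sketch below (not part of the original solution) should recover \(\mathbf{y} = (1, 4)\) and the common objective value \(17\):

    import numpy as np
    from scipy.optimize import linprog

    # Dual of Exercise 5.3: minimize 5y1 + 3y2 subject to the four >= constraints.
    c = [5, 3]
    A_ge = np.array([[2, 1], [1, 3], [1, 1], [3, 2]])
    b_ge = np.array([6, 8, 5, 9])

    # linprog expects <= rows, so negate the >= constraints.
    res = linprog(c, A_ub=-A_ge, b_ub=-b_ge, bounds=[(0, None)] * 2, method="highs")
    print(res.x, res.fun)    # approximately [1. 4.] and 17.0, matching 6(2) + 5(1) = 17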

Exercise 5.4

The optimal primal solution for Exercise 2.2 is

\[\mathbf{x}^* = \left( x_1, x_2 \right) = \left( 1, 0 \right) \quad \land \quad \mathbf{w}^* = \left( w_1, w_2, w_3, w_4 \right) = \left( 2, 1, 1, 0 \right).\]

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad 4 y_1 + 3 y_2 + 5 y_3 + y_4 &\\ \text{subject to} \quad 2 y_1 + 2 y_2 + 4 y_3 + y_4 &\geq 2\\ y_1 + 3 y_2 + y_3 + 5 y_4 &\geq 1\\ 0 &\leq y_i \quad i = 1, 2, 3, 4 \end{aligned}\end{split}\]

From the Complementary Slackness Theorem 5.3, the optimal dual solution needs to satisfy (5.7)

\[\begin{split}x_1 z_1 &= x_1 (2 y_1 + 2 y_2 + 4 y_3 + y_4 - 2) = 0\\ x_2 z_2 &= x_2 (y_1 + 3 y_2 + y_3 + 5 y_4 - 1) = 0\\ w_1 y_1 &= 0\\ w_2 y_2 &= 0\\ w_3 y_3 &= 0\\ w_4 y_4 &= 0.\end{split}\]

These constraints yield the following linear system

\[\begin{split}\begin{bmatrix} 2 & 2 & 4 & 1\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix} = \begin{bmatrix} 2\\ 0\\ 0\\ 0 \end{bmatrix}.\end{split}\]

Reducing the associated augmented matrix to echelon form gives

\[\mathbf{y} = \left( y_1, y_2, y_3, y_4 \right) = \left( 0, 0, 0, 2 \right).\]

Since \(\mathbf{y}\) is feasible, the Strong Duality Theorem guarantees \(\mathbf{y}^* = \mathbf{y}\).
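Indeed, checking the two dual constraints and the objective values directly confirms feasibility and a zero duality gap:

\[2(0) + 2(0) + 4(0) + 1(2) = 2 \geq 2, \quad 0 + 3(0) + 0 + 5(2) = 10 \geq 1, \quad 4(0) + 3(0) + 5(0) + 1(2) = 2 = 2(1) + 1(0).\]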

Exercise 5.5

(a)

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad 6 y_1 + 1.5 y_2 + 4 y_3 &\\ \text{subject to} \quad 2 y_1 - 2 y_2 + 3 y_3 &\geq 2\\ 3 y_1 + 4 y_2 + 2 y_3 &\geq 8\\ 3 y_2 - 2 y_3 &\geq -1\\ 6 y_1 - 4 y_3 &\geq -2\\ 0 &\leq y_i \quad i = 1, 2, 3. \end{aligned}\end{split}\]

(b)

\[w_1, x_2, w_3, x_4 \in \mathcal{N} \quad \land \quad x_1, w_2, x_3 \in \mathcal{B}\]

(c)

The current primal solution is feasible but degenerate:

\[x_1 = 3, \quad x_2 = 0, \quad x_3 = 2.5, \quad x_4 = 0, \quad w_1 = 0, \quad w_2 = 0, \quad w_3 = 0.\]

(d)

The corresponding dual dictionary is

\[\begin{split}-\xi &= -3.5 - 3 z_1 - 2.5 z_3\\ y_1 &= 0.25 + 0.5 z_1 - 1.25 y_2 + 0.75 z_3\\ z_2 &= -6.25 + 1.5 z_1 + 3.25 y_2 + 1.25 z_3\\ y_3 &= 0.5 + 1.5 y_2 - 0.5 z_3\\ z_4 &= 1.5 + 3 z_1 - 13.5 y_2 + 6.5 z_3.\end{split}\]

(e)

The current dual solution is not feasible:

\[y_1 = 0.25, \quad y_2 = 0, \quad y_3 = 0.5, \quad z_1 = 0, \quad z_2 = -6.25, \quad z_3 = 0, \quad z_4 = 1.5.\]

(f)

Even though the primal and dual solutions satisfy (5.7), the Complementary Slackness Theorem 5.3 requires both solutions to be feasible. Since the dual solution is infeasible, the matching objective values do not certify optimality.

(g)

The current primal solution is not optimal because the set of entering variables is not empty.

(h)

Choose \(x_2\) and \(w_2\) as the entering and leaving variable respectively where

\[x_2 = \min \left\{ x_1, w_2, x_3 \right\}_{\geq 0} = \min \left\{ 2, 0, 2 \right\}_{\geq 0}.\]

Pivot the primal dictionary to

\[\begin{split}\zeta &= \frac{1}{13} \left( 45.5 + 28 w_1 - 25 w_2 - 44 w_3 + 318 x_4 \right)\\ x_1 &= \frac{1}{13} \left( 39 - 14 w_1 + 6 w_2 + 9 w_3 - 120 x_4 \right)\\ x_2 &= \frac{1}{13} \left( 5 w_1 - 4 w_2 - 6 w_3 + 54 x_4 \right)\\ x_3 &= \frac{1}{13} \left( 32.5 - 16 w_1 + 5 w_2 + 14 w_3 - 152 x_4 \right).\end{split}\]

Since the leaving variable \(w_2\) was already zero, the entering variable \(x_2\) enters at value zero; the objective function and primal solution therefore stay the same, which by definition makes this a degenerate pivot.

Exercise 5.6

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad 6 y_1 - y_2 + 6 y_3 + y_4 + 6 y_5 - 3 y_6 &\\ \text{subject to} \quad -2 y_1 - 3 y_2 + 9 y_3 + y_4 + 7 y_5 - 5 y_6 &\geq -1\\ 7 y_1 + y_2 - 4 y_3 - y_4 - 3 y_5 + 2 y_6 &\geq -2\\ 0 &\leq y_i \quad i = 1, 2, \ldots, 6. \end{aligned}\end{split}\]

Observe that the initial primal basic solution is infeasible while the associated dual basic solution is feasible. The dual simplex method can therefore be used to solve the primal linear program:

\[\begin{split}\begin{aligned} \text{maximize} \quad \zeta &= -x_1 - 2 x_2\\ \text{subject to} \quad w_1 &= 6 + 2 x_1 - 7 x_2\\ w_2 &= -1 + 3 x_1 - x_2\\ w_3 &= 6 - 9 x_1 + 4 x_2\\ w_4 &= 1 - x_1 + x_2\\ w_5 &= 6 - 7 x_1 + 3 x_2\\ w_6 &= -3 + 5 x_1 - 2 x_2\\ x_i &\geq 0 \quad i = 1, 2\\ w_i &\geq 0 \quad i = 1, 2, \ldots, 6. \end{aligned}\end{split}\]

Initial infeasible solution:

\[x_1 = 0, \quad x_2 = 0, \quad w_1 = 6, \quad w_2 = -1, \quad w_3 = 6, \quad w_4 = 1, \quad w_5 = 6, \quad w_6 = -3.\]

Choose the leaving variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{B}} \left\{ b_i \right\}_{< 0} = \argmin_{\mathcal{B}} \left\{ w_2 = -1, w_6 = -3 \right\}_{< 0} = w_6.\]

Choose the entering variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{N}} \left\{ \frac{\left\vert c_j \right\vert}{a_{6j}} \right\}_{a_{6j} > 0} = \argmin_{\mathcal{N}} \left\{ x_1 = \frac{1}{5} \right\}_{\geq 0} = x_1.\]

Pivot the dictionary to

\[\begin{split}\begin{aligned} \text{maximize} \quad \zeta &= \frac{1}{5} \left( -3 - w_6 - 12 x_2 \right)\\ \text{subject to} \quad w_1 &= \frac{1}{5} \left( 36 + 2 w_6 - 31 x_2 \right)\\ w_2 &= \frac{1}{5} \left( 4 + 3 w_6 + x_2 \right)\\ w_3 &= \frac{1}{5} \left( 3 - 9 w_6 + 2 x_2 \right)\\ w_4 &= \frac{1}{5} \left( 2 - w_6 + 3 x_2 \right)\\ w_5 &= \frac{1}{5} \left( 9 - 7 w_6 + x_2 \right)\\ x_1 &= \frac{1}{5} \left( 3 + w_6 + 2 x_2 \right)\\ x_i &\geq 0 \quad i = 1, 2\\ w_i &\geq 0 \quad i = 1, 2, \ldots, 6. \end{aligned}\end{split}\]

Update the current dictionary’s solution to

\[x_1 = \frac{3}{5}, \quad x_2 = 0, \quad w_1 = \frac{36}{5}, \quad w_2 = \frac{4}{5}, \quad w_3 = \frac{3}{5}, \quad w_4 = \frac{2}{5}, \quad w_5 = \frac{9}{5}, \quad w_6 = 0.\]

Since both dictionaries are feasible and their corresponding sets of entering variables are empty, both dictionaries are optimal.
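As an independent check, the primal read off the initial dictionary can be solved with scipy.optimize.linprog; the sketch below (not part of the original solution) should report the same optimum \(\zeta = -\frac{3}{5}\) at \(\left( \frac{3}{5}, 0 \right)\):

    import numpy as np
    from scipy.optimize import linprog

    # Primal of Exercise 5.6 read off the initial dictionary: maximize -x1 - 2x2.
    c_max = np.array([-1, -2])
    A_ub = np.array([[-2, 7], [-3, 1], [9, -4], [1, -1], [7, -3], [-5, 2]])
    b_ub = np.array([6, -1, 6, 1, 6, -3])

    # linprog minimizes, so negate the objective and flip the sign of the optimal value.
    res = linprog(-c_max, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    print(res.x, -res.fun)    # approximately [0.6 0. ] and -0.6 = -3/5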

Exercise 5.7

To make the initial dual dictionary feasible, replace \(\zeta\) with \(\eta = -x_1 - x_2 - x_3\) so that

\[\begin{split}\begin{aligned} \text{maximize} \quad \eta &= -x_1 - x_2 - x_3\\ \text{subject to} \quad w_1 &= -2 + x_1 + x_2 + x_3\\ w_2 &= 1 - 2 x_1 + x_2 - x_3\\ x_i &\geq 0 \quad i = 1, 2, 3\\ w_i &\geq 0 \quad i = 1, 2. \end{aligned}\end{split}\]

Initial infeasible solution:

\[x_1 = 0, \quad x_2 = 0, \quad x_3 = 0, \quad w_1 = -2, \quad w_2 = 1.\]

Choose the leaving variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{B}} \left\{ b_i \right\}_{< 0} = \argmin_{\mathcal{B}} \left\{ w_1 = -2 \right\}_{< 0} = w_1.\]

Under Bland’s rule, choose the entering variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{N}} \left\{ \frac{\left\vert c_j \right\vert}{a_{1j}} \right\}_{a_{1j} > 0} = \argmin_{\mathcal{N}} \left\{ x_1 = \frac{1}{1}, x_2 = \frac{1}{1}, x_3 = \frac{1}{1} \right\}_{\geq 0} = x_1.\]

Pivot the dictionary to

\[\begin{split}\begin{aligned} \text{maximize} \quad \eta &= -2 - w_1\\ \text{subject to} \quad x_1 &= 2 + w_1 - x_2 - x_3\\ w_2 &= -3 - 2 w_1 + 3 x_2 + x_3\\ x_i &\geq 0 \quad i = 1, 2, 3\\ w_i &\geq 0 \quad i = 1, 2. \end{aligned}\end{split}\]

Update the current dictionary’s solution to

\[x_1 = 2, \quad x_2 = 0, \quad x_3 = 0, \quad w_1 = 0, \quad w_2 = -3.\]

Choose the leaving variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{B}} \left\{ b_i \right\}_{< 0} = \argmin_{\mathcal{B}} \left\{ w_2 = -3 \right\}_{< 0} = w_2.\]

Under Bland’s rule, choose the entering variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{N}} \left\{ \frac{\left\vert c_j \right\vert}{a_{2j}} \right\}_{a_{2j} > 0} = \argmin_{\mathcal{N}} \left\{ x_2 = \frac{0}{3}, x_3 = \frac{0}{1} \right\}_{\geq 0} = x_2.\]

Pivot the dictionary to

\[\begin{split}\begin{aligned} \text{maximize} \quad \eta &= -2 - w_1\\ \text{subject to} \quad x_1 &= \frac{1}{3} \left( 3 + w_1 - w_2 - 2 x_3 \right)\\ x_2 &= \frac{1}{3} \left( 3 + 2 w_1 + w_2 - x_3 \right)\\ x_i &\geq 0 \quad i = 1, 2, 3\\ w_i &\geq 0 \quad i = 1, 2. \end{aligned}\end{split}\]

Update the current dictionary’s solution to

\[x_1 = 1, \quad x_2 = 1, \quad x_3 = 0, \quad w_1 = 0, \quad w_2 = 0.\]

Since both dictionaries are feasible and their corresponding sets of entering variables are empty, both dictionaries are optimal.

The starting dictionary for Phase II is

\[\begin{split}\begin{aligned} \text{maximize} \quad \zeta &= \frac{1}{3} \left( -12 - 10 w_1 - 8 w_2 + 2 x_3 \right)\\ \text{subject to} \quad x_1 &= \frac{1}{3} \left( 3 + w_1 - w_2 - 2 x_3 \right)\\ x_2 &= \frac{1}{3} \left( 3 + 2 w_1 + w_2 - x_3 \right)\\ x_i &\geq 0 \quad i = 1, 2, 3\\ w_i &\geq 0 \quad i = 1, 2. \end{aligned}\end{split}\]

Choose \(x_3\) and \(x_1\) as the entering and leaving variable respectively where

\[x_3 = \min \left\{ x_1, x_2 \right\}_{\geq 0} = \min \left\{ \frac{3}{2}, 3 \right\}_{\geq 0}.\]

Pivot the dictionary to

\[\begin{split}\begin{aligned} \text{maximize} \quad \zeta &= -3 - x_1 - 3 w_1 - 3 w_2\\ \text{subject to} \quad x_2 &= \frac{1}{2} \left( 1 + x_1 + w_1 + w_2 \right)\\ x_3 &= \frac{1}{2} \left( 3 - 3 x_1 + w_1 - w_2 \right)\\ x_i &\geq 0 \quad i = 1, 2, 3\\ w_i &\geq 0 \quad i = 1, 2. \end{aligned}\end{split}\]

Update the current dictionary’s feasible solution to

\[x_1 = 0, \quad x_2 = \frac{1}{2}, \quad x_3 = \frac{3}{2}, \quad w_1 = 0, \quad w_2 = 0.\]

Since the set of primal entering variables is empty, the dictionary is optimal.

Exercise 5.8

The initial dual dictionary is already feasible, so there is no need to modify the primal objective function.

Choose the leaving variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{B}} \left\{ b_i \right\}_{< 0} = \argmin_{\mathcal{B}} \left\{ w_1 = -5 \right\}_{< 0} = w_1.\]

Choose the entering variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{N}} \left\{ \frac{\left\vert c_j \right\vert}{a_{1j}} \right\}_{a_{1j} > 0} = \argmin_{\mathcal{N}} \left\{ x_2 = \frac{3}{5} \right\}_{\geq 0} = x_2.\]

Pivot the dictionary to

\[\begin{split}\begin{aligned} \text{maximize} \quad \zeta &= \frac{1}{5} \left( -15 - 11 x_1 - 3 w_1 - 8 x_3 \right)\\ \text{subject to} \quad x_2 &= \frac{1}{5} \left( 5 + 2 x_1 + w_1 + x_3 \right)\\ w_2 &= \frac{1}{5} \left( 25 - 8 x_1 + w_1 - 9 x_3 \right)\\ x_i &\geq 0 \quad i = 1, 2, 3\\ w_i &\geq 0 \quad i = 1, 2. \end{aligned}\end{split}\]

Update the current dictionary’s feasible solution to

\[x_1 = 0, \quad x_2 = 1, \quad x_3 = 0, \quad w_1 = 0, \quad w_2 = 5.\]

Since both dictionaries are feasible and their corresponding sets of entering variables are empty, both dictionaries are optimal.

Exercise 5.9

To make the initial dual dictionary feasible, replace \(\zeta\) with \(\eta = -x_1 - x_2\) so that

\[\begin{split}\begin{aligned} \text{maximize} \quad \eta &= -x_1 - x_2\\ \text{subject to} \quad w_1 &= -3 + x_1 + x_2\\ w_2 &= -1 + x_1 - x_2\\ w_3 &= 2 - x_1 - 2 x_2\\ x_i &\geq 0 \quad i = 1, 2\\ w_i &\geq 0 \quad i = 1, 2, 3. \end{aligned}\end{split}\]

Initial infeasible solution:

\[x_1 = 0, \quad x_2 = 0, \quad w_1 = -3, \quad w_2 = -1, \quad w_3 = 2.\]

Choose the leaving variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{B}} \left\{ b_i \right\}_{< 0} = \argmin_{\mathcal{B}} \left\{ w_1 = -3, w_2 = -1 \right\}_{< 0} = w_1.\]

Under Bland’s rule, choose the entering variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{N}} \left\{ \frac{\left\vert c_j \right\vert}{a_{1j}} \right\}_{a_{1j} > 0} = \argmin_{\mathcal{N}} \left\{ x_1 = \frac{1}{1}, x_2 = \frac{1}{1} \right\}_{\geq 0} = x_1.\]

Pivot the dictionary to

\[\begin{split}\begin{aligned} \text{maximize} \quad \eta &= -3 - w_1\\ \text{subject to} \quad x_1 &= 3 + w_1 - x_2\\ w_2 &= 2 + w_1 - 2 x_2\\ w_3 &= -1 - w_1 - x_2\\ x_i &\geq 0 \quad i = 1, 2\\ w_i &\geq 0 \quad i = 1, 2, 3. \end{aligned}\end{split}\]

Update the current dictionary’s solution to

\[x_1 = 3, \quad x_2 = 0, \quad w_1 = 0, \quad w_2 = 2, \quad w_3 = -1.\]

Choose the leaving variable that satisfies

\[\DeclareMathOperator*{\argmin}{arg\,min} \argmin_{\mathcal{B}} \left\{ b_i \right\}_{< 0} = \argmin_{\mathcal{B}} \left\{ w_3 = -1 \right\}_{< 0} = w_3.\]

Observe that the set of entering variables

\[\left\{ \frac{\left\vert c_j \right\vert}{a_{3j}} \right\}_{a_{3j} > 0} = \emptyset.\]

Examining the dual dictionary reveals that the dual is feasible but unbounded. Thus, the primal is infeasible according to the Weak Duality Theorem. Note that one does not have to examine the dual to confirm this: the row \(w_3 = -1 - w_1 - x_2\) shows that \(w_3 \leq -1\) whenever the nonbasic variables \(w_1\) and \(x_2\) are nonnegative, so the third constraint can never be satisfied.
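The same conclusion can be reached numerically: the sketch below (not part of the original solution) hands the constraints to scipy.optimize.linprog, which reports an infeasible status:

    import numpy as np
    from scipy.optimize import linprog

    # Constraints of Exercise 5.9: x1 + x2 >= 3, x1 - x2 >= 1, x1 + 2x2 <= 2, x >= 0.
    A_ub = np.array([[-1, -1], [-1, 1], [1, 2]])
    b_ub = np.array([-3, -1, 2])

    # Any objective will do for a pure feasibility check.
    res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    print(res.status, res.message)    # status 2: the problem is infeasible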

Exercise 5.10 and Exercise 5.11

The previous exercises were solved using the proposed tool.

Exercise 5.12

Recall that the expression \(\infty - \infty\) is undefined even in the affinely extended real number system. Hence \(f(x, y) = x - y\) is not a continuous function on \([0, \infty] \times [0, \infty]\), so von Neumann’s Minimax Theorem cannot be applied.

Observe that

\[\begin{split}\min_{y \geq 0} x - y = \begin{cases} \text{undefined} & \text{if } x = \infty\\ -\infty & \text{otherwise} \end{cases}\end{split}\]

and

\[\begin{split}\max_{x \geq 0} x - y = \begin{cases} \text{undefined} & \text{if } y = \infty\\ \infty & \text{otherwise.} \end{cases}\end{split}\]

The maximin and minimax values are respectively

\[\begin{split}\max_{x \geq 0} \min_{y \geq 0} x - y = \begin{cases} \text{undefined} & \text{if } x = \infty\\ -\infty & \text{otherwise} \end{cases}\end{split}\]

and

\[\begin{split}\min_{y \geq 0} \max_{x \geq 0} x - y = \begin{cases} \text{undefined} & \text{if } y = \infty\\ \infty & \text{otherwise.} \end{cases}\end{split}\]

Restricting attention to finite \(x\) and \(y\), the maximin value \(-\infty\) and the minimax value \(+\infty\) differ, so the conclusion of the Minimax Theorem indeed fails here.

Exercise 5.13

The proposed process matches the transformations performed in Exercise 5.1, but the logic is incorrect.

The Weak Duality Theorem ensures the validity of the first inequality. The second inequality comes from rewriting the dualized problem as a maximization, so the dualized problem should be thought of as switching places with the primal problem, as shown in Figure 5.1. The same reasoning applies to the third and fourth inequalities. One way to visualize this is as a graph with four vertices and four edges:

\[(\mathcal{P}, \leq, \mathcal{D}) \quad \land \quad (\mathcal{D}, \equiv, \mathcal{D}_{\max}) \quad \land \quad (\mathcal{D}_{\max}, \leq, \mathcal{P}_{\min}) \quad \land \quad (\mathcal{P}_{\min}, \equiv, \mathcal{P}).\]

This process merely illustrates that the dual of the dual is the primal. To turn all inequalities into equalities, the primal and dual need to be identical. This restricts the primal dictionary’s matrix form to be antisymmetric because the dual dictionary is the negative transpose of the primal dictionary.

Exercise 5.14

(a)

Observe that \(x_j = 0\) (with slack values \(w_i = b_i \geq 0\)) is a basic feasible solution. Furthermore, since each \(x_j\) is bounded above by \(u_j\), the LP is bounded. Note that the lack of an upper bound on the \(b_i\) does not matter because the simplex method never decreases the objective function. Applying the fundamental theorem of linear programming shows that this problem always has an optimal solution.

(b)

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad \sum_{i = 1}^m b_i y_i + \sum_{j = 1}^n u_j y_{m + j} &\\ \text{subject to} \quad y_{m + j} + \sum_{i = 1}^m y_i a_{ij} &\geq c_j \quad j = 1, 2, \ldots, n\\ 0 &\leq y_k \quad k = 1, 2, \ldots, m + n. \end{aligned}\end{split}\]

Given the optimal solution \(x^*\), the corresponding resource usage is \(b^*_i = \sum_{j = 1}^n a_{ij} x^*_j\). Assuming the market is in equilibrium, \(c_j = \sum_{i = 1}^m a_{ij} p_i\).

Combining the foregoing assumptions with the Strong Duality Theorem,

\[\begin{split}\sum_{j = 1}^n c_j x^*_j &= \sum_{i = 1}^m b^*_i y_i + \sum_{j = 1}^n u_j y_{m + j}\\ \sum_{j = 1}^n \sum_{i = 1}^m x^*_j a_{ij} p_i &= \sum_{i = 1}^m \sum_{j = 1}^n x^*_j a_{ij} y_i + \sum_{j = 1}^n u_j y_{m + j}.\end{split}\]

Therefore, \(y^*_i = p_i\) and \(y^*_{m + j} = 0\).

Exercise 5.15

The optimal primal solution for Exercise 2.19 is

\[\begin{split}\begin{array}{ccc} & w^*_0 = 0 &\\ j < k & x^*_j = 1 & w^*_j = 0\\ j = k & x^*_k = \frac{\beta - \sum_{j = 1}^{k - 1} q_j}{q_k} & w^*_k = 1 - \frac{\beta - \sum_{j = 1}^{k - 1} q_j}{q_k}\\ j > k & x^*_j = 0 & w^*_j = 1. \end{array}\end{split}\]

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad \beta y_0 + \sum_{j = 1}^n y_j &\\ \text{subject to} \quad q_j y_0 + y_j &\geq p_j\\ 0 &\leq y_0\\ 0 &\leq y_j \quad j = 1, \ldots, n. \end{aligned}\end{split}\]

From the Complementary Slackness Theorem 5.3, the optimal dual solution needs to satisfy (5.7)

\[\begin{split}\begin{array}{ccc} & w_0 y_0 = 0 &\\ j < k & x_j z_j = (1) \left( q_j y_0 + y_j - p_j \right) = 0 & w_j y_j = (0) y_j = 0\\ j = k & x_k z_k = \left( \frac{\beta - \sum_{j = 1}^{k - 1} q_j}{q_k} \right) \left( q_k y_0 + y_k - p_k \right) = 0 & w_k y_k = \left( 1 - \frac{\beta - \sum_{j = 1}^{k - 1} q_j}{q_k} \right) y_k = 0\\ j > k & x_j z_j = (0) \left( q_j y_0 + y_j - p_j \right) = 0 & w_j y_j = (1) y_j = 0. \end{array}\end{split}\]

The constraints imposed by the positive primal slack variables \(w_j\) for \(j \geq k\) force \(y_j = 0\) for \(j \geq k\). Since \(x_k > 0\), the result \(y_k = 0\) in turn forces \(y_0 = \frac{p_k}{q_k}\). Consequently,

\[\begin{split}q_j y_0 + y_j - p_j &= 0\\ y_j &= p_j - q_j \left( \frac{p_k}{q_k} \right)\\ &= q_j \left( \frac{p_j}{q_j} - \frac{p_k}{q_k} \right)\end{split}\]

for \(j < k\). Since \(\frac{p_1}{q_1} > \cdots > \frac{p_n}{q_n}\), the proposed \(\mathbf{y}\) is feasible and is guaranteed to be optimal by the Strong Duality Theorem.
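To make the formulas concrete, here is a small numerical sketch with made-up data \(p = (6, 5, 4)\), \(q = (2, 3, 4)\), and \(\beta = 4\) (so that \(\frac{p_1}{q_1} > \frac{p_2}{q_2} > \frac{p_3}{q_3}\) and \(k = 2\)); it builds the greedy primal solution and the dual from the formulas above and confirms that both objective values equal \(\frac{28}{3}\):

    import numpy as np

    p = np.array([6., 5., 4.])    # objective coefficients with decreasing ratios p_j / q_j
    q = np.array([2., 3., 4.])    # resource usage per unit
    beta = 4.0                    # budget

    # Greedy primal solution: fill items in order until the budget runs out.
    x = np.zeros(3)
    remaining = beta
    for j in range(3):
        x[j] = min(1.0, remaining / q[j])
        remaining -= q[j] * x[j]

    # Dual solution from the complementary slackness argument above (k = 2, i.e. index 1).
    k = 1
    y0 = p[k] / q[k]                                         # y_0 = p_k / q_k
    y = np.where(np.arange(3) < k, q * (p / q - y0), 0.0)    # y_j = q_j (p_j/q_j - p_k/q_k) for j < k

    print(p @ x, beta * y0 + y.sum())    # both print 9.333... = 28/3, so there is no duality gap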

Exercise 5.16

The dual of \(\mathcal{P}\) is

\[\begin{split}\begin{aligned} \mathcal{D} \quad \text{minimize} \quad \sum_{i = 1}^m -b_i y_i &\\ \text{subject to} \quad \sum_{i = 1}^m -a_{ij} y_i &\geq -c_j \quad j = 1, \ldots, n\\ 0 &\leq y_i \quad i = 1, \ldots, m \end{aligned} \quad \iff \quad \begin{aligned} \mathcal{D} \quad -\text{maximize} \quad \sum_{i = 1}^m b_i y_i &\\ \text{subject to} \quad \sum_{i = 1}^m a_{ij} y_i &\leq c_j \quad j = 1, \ldots, n\\ 0 &\leq y_i \quad i = 1, \ldots, m. \end{aligned}\end{split}\]

Recall that

\[b_i = \text{requirement of nutrient } i \quad \land \quad a_{ij} = \frac{\text{nutrient } i}{\text{unit of food } j} \quad \land \quad x_j = \text{quantity of food } j \quad \land \quad c_j = \frac{\text{price}}{\text{unit of food } j}.\]

Notice that \(b_i\) can be interpreted as the required amount, and hence the importance, of a particular nutrient, whereas \(c_j\) prevents overpaying for the nutrients. From inspection, the dual represents someone who prices the nutrients directly and wants to maximize the total revenue \(\sum_{i = 1}^m b_i y_i\) while keeping each bundle of priced nutrients no more expensive than the corresponding food. Hence \(y_i\) represents \(\frac{\text{price}}{\text{nutrient}}\).

Exercise 5.17

(1)

\[\begin{split}\nabla \pi = \begin{bmatrix} \partial \pi / \partial x\\ \partial \pi / \partial y\\ \end{bmatrix} = \begin{bmatrix} f'(x) - y\\ -x + g'(y)\\ \end{bmatrix}\end{split}\]

At a critical point the gradient vanishes, \(\nabla \pi = \boldsymbol{0}\); consequently,

\[\begin{split}f'(x) - y &= -x + g'(y)\\ f'(x) + x &= g'(y) + y\\ \phi(x) &= \varphi(y).\end{split}\]

Recall that a strongly convex function is also strictly convex. Since \(f\) is strongly concave and \(g\) is strongly convex, \(f'\) is monotonically decreasing and \(g'\) is monotonically increasing on \(\mathbb{R}\). The foregoing implies \(\phi(x)\) and \(\varphi(y)\) will have at most one intersection.

(2)

\[\begin{split}\nabla^2 \pi = \begin{bmatrix} \frac{\partial^2 \pi}{\partial x^2} & \frac{\partial^2 \pi}{\partial x \partial y}\\ \frac{\partial^2 \pi}{\partial y \partial x} & \frac{\partial^2 \pi}{\partial y^2}\\ \end{bmatrix} = \begin{bmatrix} f''(x) & -1\\ -1 & g''(y) \end{bmatrix}\end{split}\]

Recall that an indefinite matrix has both positive and negative eigenvalues. Inspecting the Hessian of \(\pi\) shows that it must be indefinite because

\[\det(\nabla^2 \pi) = \prod_i \lambda_i = f''(x) g''(y) - 1 < 0.\]

Since \((x^*, y^*)\) is a critical point and the Hessian is indefinite for the entire surface, the critical point is also a saddle point.

Notice that \(\pi(\cdot, y^*)\) is strongly concave with maximizer \(x^*\), so

\[\min_y \max_x \pi(x, y) \leq \max_x \pi(x, y^*) = \pi(x^*, y^*).\]

Similarly, \(\pi(x^*, \cdot)\) is strongly convex with minimizer \(y^*\), so

\[\max_x \min_y \pi(x, y) \geq \min_y \pi(x^*, y) = \pi(x^*, y^*).\]

The foregoing implies

\[\max_x \min_y \pi(x, y) \geq \min_y \max_x \pi(x, y),\]

but the max-min inequality states that

\[\max_x \min_y \pi(x, y) \leq \min_y \max_x \pi(x, y).\]

Thus

\[\max_x \min_y \pi(x, y) = \pi(x^*, y^*) = \min_y \max_x \pi(x, y).\]
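For instance, with the illustrative choice \(f(x) = -x^2\) and \(g(y) = y^2\), so that \(\pi(x, y) = -x^2 - xy + y^2\) and \((x^*, y^*) = (0, 0)\),

\[\max_x \min_y \pi(x, y) = \max_x \left( -\frac{5}{4} x^2 \right) = 0 = \min_y \left( \frac{5}{4} y^2 \right) = \min_y \max_x \pi(x, y) = \pi(0, 0).\]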

(3)

Recall that a sum of convex functions is convex, and adding a constant or a linear function to a function does not affect its convexity.

For each \(x \in \mathbb{R}\), the maximum of \(xy - h(y)\) over \(y\) is attained where

\[\begin{split}\frac{d}{dy} \left( xy - h(y) \right) &= 0\\ x &= h'(y)\end{split}\]

according to the second derivative test

\[\frac{d^2}{dy^2} \left( xy - h(y) \right) = -h''(y) < 0.\]

Since \(h\) is strongly convex, \(h'\) is monotonically increasing, which means the equation \(x = h'(y)\) has exactly one solution. Thus, the inverse function \({h'}^{-1}(x) = y\) exists and is differentiable, with derivative

\[\left[ {h'}^{-1} \right]'(x) = \frac{1}{h''\left( {h'}^{-1}(x) \right)} > 0,\]

and is also monotonically increasing. The Legendre transform of \(h\) can be simplified to

\[L_h(x) = \max_{y \in \mathbb{R}} xy - h(y) = x {h'}^{-1}(x) - h({h'}^{-1}(x)).\]

Examining both the first derivative

\[\begin{split}\frac{d}{dx} L_h(x) &= {h'}^{-1}(x) + x \left[ {h'}^{-1} \right]'(x) - h'\left( {h'}^{-1}(x) \right) \left[ {h'}^{-1} \right]'(x)\\ &= {h'}^{-1}(x)\end{split}\]

and second derivative

\[\frac{d^2}{dx^2} L_h(x) = \left[ {h'}^{-1} \right]'(x)\]

shows that \(L_h\) is strongly convex.

(4)

\[\begin{split}\max_{x \in \mathbb{R}} f(x) - L_g(x) &= \max_{x \in \mathbb{R}} f(x) - \left[ \max_{y \in \mathbb{R}} xy - g(y) \right]\\ &= \max_{x \in \mathbb{R}} f(x) + \min_{y \in \mathbb{R}} - xy + g(y)\\ &= \max_{x \in \mathbb{R}} \min_{y \in \mathbb{R}} f(x) - xy + g(y)\\ &= \max_{x \in \mathbb{R}} \min_{y \in \mathbb{R}} \pi(x, y).\end{split}\]
\[\begin{split}\min_{y \in \mathbb{R}} \max_{x \in \mathbb{R}} \pi(x, y) &= \min_{y \in \mathbb{R}} \max_{x \in \mathbb{R}} f(x) - xy + g(y)\\ &= \min_{y \in \mathbb{R}} g(y) + \max_{x \in \mathbb{R}} -xy + f(x)\\ &= \min_{y \in \mathbb{R}} g(y) + \max_{x \in \mathbb{R}} (-y)x - (-f(x))\\ &= \min_{y \in \mathbb{R}} g(y) + L_{-f}(-y).\end{split}\]

(5)

The result from (3) implies that \(L_h\) is itself strongly convex, so one can compute \(L_h(x)\) and then apply the transform again to obtain \(L_{L_h}(z)\). Recall from (3) that

\[L_h(x) = \max_{z \in \mathbb{R}} xz - h(z) = x {h'}^{-1}(x) - h\left( {h'}^{-1}(x) \right)\]

where \({h'}^{-1}(x) = z\) and \(L_h'(x) = {h'}^{-1}(x)\).

\[\begin{split}L_{L_h}(z) &= \max_{x \in \mathbb{R}} zx - L_h(x)\\ &= z {L_h'}^{-1}(z) - L_h({L_h'}^{-1}(z))\\ &= zx - L_h(x)\\ &= zx - \left[ x {h'}^{-1}(x) - h\left( {h'}^{-1}(x) \right) \right]\\ &= zx - xz + h(z)\\ &= h(z).\end{split}\]
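As a quick numerical illustration of this involution (my own sketch with \(h(y) = y^2\), for which \({h'}^{-1}(x) = \frac{x}{2}\) and \(L_h(x) = \frac{x^2}{4}\)):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def legendre(func, x):
        """Numerically evaluate L_func(x) = max_y (x * y - func(y))."""
        res = minimize_scalar(lambda y: -(x * y - func(y)))
        return -res.fun

    h = lambda y: y ** 2    # strongly convex; h'(y) = 2y, so h'^{-1}(x) = x / 2

    xs = np.linspace(-3.0, 3.0, 7)
    L_h = np.array([legendre(h, x) for x in xs])
    print(np.allclose(L_h, xs ** 2 / 4, atol=1e-6))    # True: L_h(x) = x (x/2) - (x/2)^2 = x^2 / 4

    L_h_exact = lambda x: x ** 2 / 4
    L_L_h = np.array([legendre(L_h_exact, z) for z in xs])
    print(np.allclose(L_L_h, h(xs), atol=1e-6))        # True: L_{L_h}(z) = h(z), i.e. an involution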

References

Bur

Jim Burke. Duality theory. https://www.math.washington.edu/~burke/crs/407/notes/section4.pdf. Accessed on 2016-11-27.

Ell

Mark Ellingham. The dual simplex method. https://math.vanderbilt.edu/ellingmn/6620/post45ds.pdf. Accessed on 2016-12-10.

Rad

Luis Rademacher. The simplex algorithm. https://ocw.mit.edu/courses/mathematics/18-433-combinatorial-optimization-fall-2003/lecture-notes/l1314.pdf. Accessed on 2016-12-03.

Wil

Mark Wildon. Notes on Farkas’ lemma and the strong duality theorem for linear programming. http://www.ma.rhul.ac.uk/~uvah099/Maths/Farkas.pdf. Accessed on 2016-12-03.