Convex Analysis¶
Exercise 10.1¶
Recall that a polyhedron is defined as the intersection of a finite collection of generalized halfspaces (10.3). \(\mathbb{R}^n\) can be represented as the generalized halfspace

\[
\mathbb{R}^n = \left\{ \mathbf{x} \in \mathbb{R}^n : \mathbf{a}^\top \mathbf{x} \leq b \right\},
\]

where \(\mathbf{a} = \boldsymbol{0}\) and \(b \geq 0\).
Exercise 10.2¶
Let \(\zeta^*(b)\) denote the corresponding optimal objective function value for the dual linear program
Since \(\xi^*(b) < \infty\) for all \(b \in \mathbb{R}^m\), the weak duality theorem guarantees that \(\zeta^*(b) > -\infty\). Furthermore, every instance of \(\zeta^*(b)\) has the same feasible region because \(b\) enters the dual only through the objective. This region is a convex polyhedron (10.3), or more precisely a convex polytope by Exercise 10.1 and Carathéodory’s theorem.
The foregoing enables the application of the fundamental theorem of linear programming, which states that an optimal solution \(y_i\) corresponding to \(\zeta^*(b_i)\) occurs at a vertex (extreme point) of the feasible set. Note that if the optimum is attained at two adjacent vertices of the feasible set, then every point on the line segment joining them is also optimal, so there are infinitely many optimal solutions.
Given \(\left\{ \zeta^*(b_i), y_i \right\}_{i = 0}^1\),
where
Observe that
when the optimal solution to \(\left\{ \zeta^*(b_i) \right\}_{i = 0}^2\) is at the same vertex or \(y_2\) lies on the line segment between the two adjacent vertices \(y_0\) and \(y_1\). When \(y_0\) and \(y_1\) are non-adjacent vertices, \(\left\{ \Delta_{i2} \right\}_{i = 0}^1\) can be viewed as translating \(y\) towards the optimal vertex \(y_2\). Removing the translations would leave \(y\) at a non-optimal point in the interior of a line segment that is not perpendicular to the direction of improvement of the “isoprofit” hyperplanes, i.e.
Therefore, \(\zeta^*(b)\) is concave. Since the strong duality theorem gives \(\xi^*(b) = \zeta^*(b)\), the optimal value \(\xi^*(b)\) of the maximization problem is concave as well.
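As a quick sanity check (not part of the exercise), the sketch below evaluates the optimal value of a small, made-up maximization LP \(\max \{ \mathbf{c}^\top \mathbf{x} : A \mathbf{x} \leq \mathbf{b},\ \mathbf{x} \geq \boldsymbol{0} \}\) at \(b_0\), \(b_1\), and a convex combination of the two, and verifies the concavity inequality numerically with scipy.optimize.linprog. The data \(A\), \(\mathbf{c}\), \(b_0\), \(b_1\) are arbitrary choices, not taken from the text.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: maximize c@x subject to A x <= b, x >= 0.
A = np.array([[1.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])

def opt_value(b):
    # linprog minimizes, so negate the objective and the returned value.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    assert res.status == 0
    return -res.fun

b0, b1, lam = np.array([4.0, 6.0]), np.array([2.0, 8.0]), 0.5
lhs = opt_value(lam * b0 + (1 - lam) * b1)             # value at the combination (5.0 here)
rhs = lam * opt_value(b0) + (1 - lam) * opt_value(b1)  # combination of values (4.5 here)
print(lhs, rhs, lhs >= rhs - 1e-9)                     # concavity: lhs >= rhs
```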
Exercise 10.3¶
Since \(P\) and \(\tilde{P}\) could each represent an empty polyhedron, the proof needs to cover two additional cases. The existing proof of Theorem 10.4 still holds for the generalized halfspaces when the polyhedra are nonempty.
\(P = \emptyset\) and \(\tilde{P} = \emptyset\)¶
Suppose
which is equivalent to saying both systems of inequalities are infeasible.
Since \(P\) and \(\tilde{P}\) are empty sets, Farkas’ Lemma (10.8) guarantees that \(\mathbf{A}^\top y = \boldsymbol{0}\) and \(\tilde{\mathbf{A}}^\top \tilde{y} = \boldsymbol{0}\). Consequently, \(H = \emptyset\) and \(\tilde{H} = \emptyset\) because (10.7) still holds. By inspection, the generalized halfspaces are disjoint.
To see that \(P \subset H\) and \(\tilde{P} \subset \tilde{H}\), recall that the empty set is a subset of every set including itself.
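For a concrete illustration of how Farkas’ Lemma certifies emptiness, the sketch below takes a tiny hypothetical infeasible system \(A \mathbf{x} \leq \mathbf{b}\) and recovers a certificate \(y \geq 0\) with \(\mathbf{A}^\top y = \boldsymbol{0}\) and \(\mathbf{b}^\top y < 0\) using scipy.optimize.linprog; the system is invented for the example and is not the one referenced by (10.8).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical empty polyhedron P = {x : A x <= b}: the rows say x <= 0 and x >= 1.
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])

# The system itself is infeasible (status 2 = infeasible in linprog/HiGHS).
empty = linprog(np.zeros(1), A_ub=A, b_ub=b, bounds=[(None, None)], method="highs")
print(empty.status == 2)

# Farkas certificate: y >= 0 (default linprog bounds) with A^T y = 0 and b^T y < 0,
# normalized here to b^T y = -1 so it can be found as a feasibility problem.
cert = linprog(np.zeros(2),
               A_eq=np.vstack([A.T, b.reshape(1, -1)]),
               b_eq=np.array([0.0, -1.0]),
               method="highs")
y = cert.x
print(y, A.T @ y, b @ y)   # e.g. y = [1, 1], A^T y = [0], b^T y = -1
```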
\(P \neq \emptyset\) and \(\tilde{P} = \emptyset\)¶
Suppose
The polyhedra are clearly disjoint by definition of disjoint sets, so (10.5) through (10.7) still hold. However, even though \(P\) is nonempty and the negation of Farkas’ Lemma states that
the fact that \(\tilde{\mathbf{A}}^\top \tilde{y} = 0\) forces \(\mathbf{A}^\top y = 0\). Furthermore, in order to satisfy (10.7), \(\tilde{b}^\top \tilde{y} < 0\). Consequently, \(H = \mathbb{R}^n\) and \(\tilde{H} = \emptyset\). By inspection, the generalized halfspaces are disjoint, and \(P \subset H\) and \(\tilde{P} \subset \tilde{H}\) hold.
Exercise 10.4¶
The dual of the primal LP
is
The possible optimal solutions are
and
Notice that each primal (dual) solution is at an extreme point, and the line segment joining the two primal (dual) solutions consists of infinitely many optimal solutions.
The strictly complementary solutions can be defined as
where \(0 < \alpha < 1\).
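Since the exercise’s LP is not reproduced above, the following sketch uses a different, hypothetical LP with a whole edge of optimal primal vertices, \(\max \{ x_1 + x_2 : x_1 + x_2 \leq 1,\ \mathbf{x} \geq \boldsymbol{0} \}\), to illustrate the same point: a strict convex combination of the two optimal vertices, paired with the optimal dual solution, is strictly complementary, whereas either vertex alone is not.

```python
import numpy as np

# Hypothetical LP:  maximize x1 + x2  subject to  x1 + x2 <= 1,  x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])

# Two optimal primal vertices and the (unique) optimal dual solution.
x0, x1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
y = np.array([1.0])

alpha = 0.5                      # any 0 < alpha < 1 works
x = alpha * x0 + (1 - alpha) * x1
w = b - A @ x                    # primal slack
z = A.T @ y - c                  # dual slack

# Complementarity: x_j z_j = 0 and w_i y_i = 0; strictness: x + z > 0 and w + y > 0.
print(np.allclose(x * z, 0.0), np.allclose(w * y, 0.0))
print(np.all(x + z > 0), np.all(w + y > 0))
```

With \(\alpha = 0\) or \(\alpha = 1\) one component of \(x + z\) vanishes, which is why the strict inequality \(0 < \alpha < 1\) is required.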
Exercise 10.5¶
The proof for Theorem 10.3 does not address whether the vectors \(z_j\) are linearly independent, which determines the rank of \(A \in \mathbb{R}^{m \times n}\). [Ang] presents an algebraic proof of the Fundamental Theorem of Linear Programming showing that there exists at least one basic solution as long as \(\text{rank}(A) = m\). Note that Theorem 10.3 implies at most \(m + 1\) points are needed. It does not cover the scenarios where \(n < m + 1\).
Exercise 10.6¶
Consider the following primal LP
and its dual
The dual written with slack variables is
If there is an optimal primal solution \(\mathbf{x}^*\) for which \(x^*_j > 0\), then index \(j\) is already covered by the primal solution and the \(j\text{th}\) dual slack variable need not be positive.
Suppose \(x_j = 0\) for every optimal solution. By the strong duality theorem, all the dual constraints are satisfied and
\(t > 0\)¶
Notice that \(\mathbf{y}^* = \mathbf{y} / t\) and \(\mathbf{z}^* = (\mathbf{z} + \mathbf{e}_j) / t\) are optimal solutions that satisfy (10.10).
\(t = 0\)¶
Notice that for any optimal dual solution \((\mathbf{y}^*, \mathbf{z}^*)\), \((\mathbf{y}^* + \mathbf{y}, \mathbf{z}^* + \mathbf{z} + \mathbf{e}_j)\) is also an optimal dual solution satisfying (10.10).
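The auxiliary construction and the variable \(t\) above are not reproduced here, but the conclusion can be illustrated on a tiny hypothetical instance: in the LP \(\max \{ x_1 : x_1 + x_2 \leq 1,\ \mathbf{x} \geq \boldsymbol{0} \}\), \(x_2 = 0\) in every optimal solution, and the optimal dual solution has a strictly positive second slack. The sketch below checks this with scipy.optimize.linprog; the instance is an assumption made for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical primal:  maximize x1  subject to  x1 + x2 <= 1,  x >= 0.
A, b, c = np.array([[1.0, 1.0]]), np.array([1.0]), np.array([1.0, 0.0])

# Primal optimum (linprog minimizes, so negate the objective).
p = linprog(-c, A_ub=A, b_ub=b, method="highs")
opt = -p.fun                                     # = 1

# x2 = 0 in every optimal solution: maximizing x2 over the optimal face gives 0.
q = linprog([0.0, -1.0], A_ub=A, b_ub=b,
            A_eq=c.reshape(1, -1), b_eq=[opt], method="highs")
print("max x2 over the optimal face:", q.x[1])   # 0.0

# Dual:  minimize b^T y  subject to  A^T y >= c,  y >= 0.
d = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")
z = A.T @ d.x - c                                # dual slacks
print("dual slack z2:", z[1])                    # strictly positive
```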
Exercise 10.7¶
The dual of the linear program (LP)
is
The proof for this (a.k.a. Clark’s Theorem [Ton]) is by contradiction. Suppose the linear program has feasible solutions, the dual has null variables, and the set of primal feasible solutions is bounded. By the fundamental theorem of linear programming, there exist optimal solutions.
Consider the following dual LP:
where the dual null variable \(y_i = 0\) for all feasible solutions. Since its constraints match those of the original dual problem, this LP is feasible, and its optimal value is zero.
The corresponding primal LP is
By the strong duality theorem, the primal has an optimal solution such that \(Ax' + w' = -e_i\) and \(x', w' \geq 0\).
Let \((x, w)\) be any feasible solution to the original LP, so that \(Ax + w = b\) with \(x, w \geq 0\). For any \(\lambda \geq 0\), \(x + \lambda x' \geq 0\) and \(A(x + \lambda x') + \left[ w + \lambda (w' + e_i) \right] = Ax + w + \lambda (Ax' + w' + e_i) = b\), so \(x + \lambda x'\) is feasible for the primal with slack \(w + \lambda (w' + e_i)\). Since \(w' \geq 0\), \(Ax' = -e_i - w' \neq 0\), hence \(x' \neq 0\) and \(\left\{ x + \lambda x' : \lambda \geq 0 \right\}\) is an unbounded ray contained in the feasible region. This contradicts the assumption that the set of primal feasible solutions is bounded, so the feasible region of the original primal LP must in fact be unbounded.
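A small hypothetical instance makes the construction concrete: in the LP below, the dual variable \(y_1\) is null, the auxiliary problem yields a direction \(x'\) with \(Ax' \leq -\mathbf{e}_1\), and adding large multiples of \(x'\) to any feasible point stays feasible, exhibiting the unbounded ray. The data and the use of scipy.optimize.linprog are illustrative assumptions, not part of the exercise.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical primal:  maximize x2  subject to  -x1 + x2 <= 1,  x2 <= 2,  x >= 0.
A = np.array([[-1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 2.0])
c = np.array([0.0, 1.0])

# y1 is a null dual variable: the dual constraint for x1 reads -y1 >= 0, and with
# y1 >= 0 this forces y1 = 0 at every dual-feasible point, so max y1 = 0.
null = linprog([-1.0, 0.0], A_ub=-A.T, b_ub=-c, method="highs")
print("max y1 over the dual feasible set:", -null.fun)

# Auxiliary primal from the argument above: find x' >= 0 with A x' + w' = -e_1,
# w' >= 0, i.e. A x' <= -e_1.
aux = linprog(np.zeros(2), A_ub=A, b_ub=-np.eye(2)[0], method="highs")
xp = aux.x                                       # e.g. x' = (1, 0)

# Any feasible point plus a large multiple of x' remains feasible: an unbounded ray.
x, lam = np.zeros(2), 1e6
print(np.all(A @ (x + lam * xp) <= b + 1e-9))    # True
```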
References