Weak Duality
Lagrange dual problem

Primal problem

In this section, we consider a possibly non-convex optimization problem
\begin{equation}\label{eq:convex-pb-L11}
p^* := \min_x \: f_0(x) \quad \mbox{subject to} \quad f_i(x) \le 0, \;\; i = 1, \ldots, m.
\end{equation}
We will refer to the above as the primal problem, and to its decision variable \(x\) as the primal variable. One purpose of Lagrange duality is to find a lower bound on a minimization problem (or an upper bound for a maximization problem). Later, we will use duality tools to derive optimality conditions for convex problems.

Lagrangian

To the problem we associate the Lagrangian \({\cal L} : \mathbf{R}^n \times \mathbf{R}^m \rightarrow \mathbf{R}\), with values
\[
{\cal L}(x,\lambda) := f_0(x) + \sum_{i=1}^m \lambda_i f_i(x).
\]
We observe that, for every feasible \(x\) and every \(\lambda \ge 0\), \(f_0(x)\) is bounded below by \({\cal L}(x,\lambda)\):
\[
\forall x \;\mbox{feasible}, \;\; \forall \lambda \ge 0: \quad f_0(x) \ge {\cal L}(x,\lambda).
\]
Indeed, at a feasible \(x\), every term \(\lambda_i f_i(x)\) is non-positive.
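This lower-bound property is easy to check numerically. The toy instance below is an assumption chosen for illustration (it does not appear in the text): minimize \(f_0(x) = x^2\) subject to \(f_1(x) = 1 - x \le 0\), for which \(p^* = 1\).

```python
import numpy as np

# Assumed toy instance (not from the text): minimize f0(x) = x^2
# subject to f1(x) = 1 - x <= 0 (i.e. x >= 1); here p* = 1 at x = 1.
def f0(x):
    return x ** 2

def f1(x):
    return 1.0 - x

def lagrangian(x, lam):
    # L(x, lambda) = f0(x) + lambda * f1(x)
    return f0(x) + lam * f1(x)

# At feasible points, lambda * f1(x) <= 0 for lambda >= 0, so the
# Lagrangian lower-bounds the objective there.
for x in np.linspace(1.0, 3.0, 21):        # feasible points (x >= 1)
    for lam in np.linspace(0.0, 5.0, 21):  # nonnegative multipliers
        assert lagrangian(x, lam) <= f0(x) + 1e-12
print("bound verified on the grid")
```

Note that the bound can fail at infeasible points: here \({\cal L}(0, 1) = 1 > f_0(0) = 0\).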
The Lagrangian can be used to express the primal problem (\ref{eq:convex-pb-L11}) as an unconstrained one. Precisely:
\begin{equation}\label{eq:pb-primal-unconstr-L11}
p^* = \min_x \: \max_{\lambda \ge 0} \: {\cal L}(x,\lambda),
\end{equation}
since the inner maximum equals \(f_0(x)\) when \(x\) is feasible, and \(+\infty\) otherwise.
Lagrange dual function

We then define the Lagrange dual function (dual function for short) as the function \(g : \mathbf{R}^m \rightarrow \mathbf{R}\) with values
\[
g(\lambda) := \min_x \: {\cal L}(x,\lambda).
\]
From the bound above, by minimizing over \(x\) in the right-hand side, we obtain
\[
\forall \lambda \ge 0: \quad p^* \ge g(\lambda).
\]
Lagrange dual problem

The best lower bound that we can obtain using the above bound is \(p^* \ge d^*\), where
\[
d^* := \max_{\lambda \ge 0} \: g(\lambda).
\]
We refer to the above problem as the (Lagrange) dual problem, and to \(\lambda\) as the dual variable. The dual problem is always convex (here, a concave maximization problem), since \(g\) is a pointwise minimum of functions that are affine in \(\lambda\).
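For an assumed toy instance (not from the text), minimize \(x^2\) subject to \(x \ge 1\), so \(p^* = 1\): the Lagrangian is \(x^2 + \lambda(1 - x)\), and minimizing over \(x\) (set the derivative to zero, \(x = \lambda/2\)) gives \(g(\lambda) = \lambda - \lambda^2/4\) in closed form, which can be maximized on a grid:

```python
import numpy as np

# Assumed toy instance (not from the text): minimize x^2 s.t. x >= 1,
# so p* = 1.  The Lagrangian is L(x, lam) = x^2 + lam * (1 - x);
# minimizing over x (dL/dx = 0 gives x = lam / 2) yields the dual function:
def g(lam):
    return lam - lam ** 2 / 4.0

p_star = 1.0
lams = np.linspace(0.0, 6.0, 601)
assert np.all(g(lams) <= p_star + 1e-12)  # weak duality: g(lam) <= p*

d_star = g(lams).max()  # best lower bound over the grid
print(d_star)           # attained near lam = 2, where g(2) = 1 = p*
```

Here the best bound is tight; in general only \(d^* \le p^*\) is guaranteed.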
Case with equality constraints

If equality constraints are present in the problem, we can represent them as two inequalities. It turns out that this leads to the same dual as if we were to directly use a single dual variable for each equality constraint, not restricted in sign. To see this, consider the problem
\[
p^* = \min_x \: f_0(x) \quad \mbox{subject to} \quad f_i(x) \le 0, \;\; i = 1, \ldots, m, \quad h_j(x) = 0, \;\; j = 1, \ldots, q.
\]
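The reduction to a single sign-free multiplier can be written out explicitly (a short derivation consistent with the definitions above; the multipliers \(\lambda_j^{\pm}\) are introduced only for the argument):

```latex
% Replace each equality h_j(x) = 0 by the pair of inequalities
% h_j(x) <= 0 and -h_j(x) <= 0, with multipliers lambda_j^+, lambda_j^- >= 0.
% In the Lagrangian these contribute
%   \lambda_j^+ h_j(x) - \lambda_j^- h_j(x) = (\lambda_j^+ - \lambda_j^-) h_j(x),
% so only the difference \nu_j := \lambda_j^+ - \lambda_j^- appears; as
% \lambda_j^+, \lambda_j^- range over all nonnegative values, \nu_j ranges
% over all of R.  Hence the Lagrangian can be written directly as
\[
{\cal L}(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x)
  + \sum_{j=1}^q \nu_j h_j(x), \qquad \lambda \ge 0, \;\; \nu \;\mbox{free}.
\]
```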
Thus, inequality constraints in the original problem are associated with sign constraints on the corresponding multipliers, while the multipliers for the equality constraints are not explicitly constrained.

Weak duality and minimax inequality

Weak duality theorem

We have obtained:

Theorem: weak duality. For a general (possibly non-convex) problem
\[
p^* = \min_x \: f_0(x) \quad \mbox{subject to} \quad f_i(x) \le 0, \;\; i = 1, \ldots, m,
\]
weak duality holds: \(p^* \ge d^*\).
Geometric interpretation

Assume that there is only one inequality constraint in the primal problem (\(m = 1\)), and let
\[
{\cal G} := \left\{ (f_1(x), f_0(x)) : x \in \mathbf{R}^n \right\}.
\]
We have
\[
p^* = \min_{(u,t) \in {\cal G}} \: t \quad \mbox{subject to} \quad u \le 0,
\]
and, for every \(\lambda \ge 0\),
\[
g(\lambda) = \min_{(u,t) \in {\cal G}} \: t + \lambda u.
\]
Geometrically, \(g(\lambda)\) is the vertical-axis intercept (\(u = 0\)) of a line of slope \(-\lambda\) that supports \({\cal G}\) from below; weak duality expresses that no such intercept can exceed \(p^*\).
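The set \({\cal G}\) and the supporting-line interpretation can be sampled numerically. The instance below is an assumption for illustration (not from the text): \(f_0(x) = x^2\), \(f_1(x) = 1 - x\), so \(p^* = 1\).

```python
import numpy as np

# Assumed toy instance (not from the text): f0(x) = x^2, f1(x) = 1 - x,
# i.e. minimize x^2 subject to x >= 1, so p* = 1.
xs = np.linspace(-3.0, 3.0, 601)
u = 1.0 - xs          # f1 values
t = xs ** 2           # f0 values: (u, t) ranges over a sampled version of G

# p* = min { t : (u, t) in G, u <= 0 }; the small tolerance absorbs
# floating-point rounding of the grid.
p_star = t[u <= 1e-9].min()

# g(lam) = min { t + lam * u : (u, t) in G }: the intercept of a line of
# slope -lam supporting G from below.
lam = 2.0
g_lam = (t + lam * u).min()

print(p_star, g_lam)  # weak duality: g_lam <= p_star
```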
Minimax inequality

Weak duality can also be obtained as a consequence of the following minimax inequality, which is valid for any function \(F\) of two vector variables \(x, y\), and any subsets \({\cal X}\), \({\cal Y}\):
\begin{equation}\label{eq:min-max-theorem-L11}
\max_{y \in {\cal Y}} \: \min_{x \in {\cal X}} \: F(x,y) \le \min_{x \in {\cal X}} \: \max_{y \in {\cal Y}} \: F(x,y).
\end{equation}
To prove this, note that for every \(x' \in {\cal X}\) and \(y' \in {\cal Y}\), we have \(\min_{x \in {\cal X}} F(x,y') \le F(x',y') \le \max_{y \in {\cal Y}} F(x',y)\); taking the maximum over \(y'\) on the left and the minimum over \(x'\) on the right yields the result.
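The inequality is easy to check on a finite game, where \({\cal X}\) indexes the rows and \({\cal Y}\) the columns of a payoff matrix (the matrix below is an arbitrary illustration):

```python
import numpy as np

# An arbitrary 2x2 payoff matrix; rows are x-choices, columns y-choices.
F = np.array([[0.0, 2.0],
              [3.0, 1.0]])

lhs = F.min(axis=0).max()  # max_y min_x F(x, y): y plays first
rhs = F.max(axis=1).min()  # min_x max_y F(x, y): x plays first

print(lhs, rhs)            # here 1.0 <= 2.0: the inequality can be strict
```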
Weak duality is indeed a direct consequence of the above. To see this, start from the unconstrained formulation (\ref{eq:pb-primal-unconstr-L11}), and apply the above inequality with \(F\) the Lagrangian of the original problem, \(x\) the primal variable, and \(y = \lambda\) the vector of Lagrange multipliers.

Interpretation as a game

We can interpret the minimax inequality result in the context of a one-shot, zero-sum game. Assume that you have two players A and B, where A controls the decision variable \(x\), while B controls \(y\). We assume that both players have full knowledge of the other player's decision, once it is made. The player A seeks to minimize a payoff (to player B) \(F(x,y)\), while B seeks to maximize that payoff. The right-hand side in (\ref{eq:min-max-theorem-L11}) is the optimal payoff if player A is required to play first. Obviously, a player can do better by playing second, since he or she then knows the opponent's choice and can adapt to it.

Examples

Linear optimization problem, inequality form

Consider the LP in standard inequality form
\[
p^* = \min_x \: c^T x \quad \mbox{subject to} \quad Ax \le b,
\]
where \(A \in \mathbf{R}^{m \times n}\), \(b \in \mathbf{R}^m\), \(c \in \mathbf{R}^n\).
The Lagrange function is
\[
{\cal L}(x,\lambda) = c^T x + \lambda^T (Ax - b),
\]
and the corresponding dual function is
\[
g(\lambda) = \min_x \: {\cal L}(x,\lambda) = \left\{ \begin{array}{ll} -\lambda^T b & \mbox{if } A^T \lambda + c = 0, \\ -\infty & \mbox{otherwise,} \end{array} \right.
\]
since the Lagrangian is affine in \(x\), hence unbounded below unless the coefficient of \(x\) vanishes.
Weak duality implies that
\[
p^* \ge d^* = \max_\lambda \: -\lambda^T b \quad \mbox{subject to} \quad \lambda \ge 0, \;\; A^T \lambda + c = 0,
\]
which is another LP.
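A small numerical check (the LP instance and the multiplier below are assumptions chosen for illustration): for any dual-feasible \(\lambda\), the value \(-b^T\lambda\) bounds \(c^T x\) from below at every primal-feasible point.

```python
import numpy as np

# Assumed instance: minimize c^T x subject to A x <= b, the box -1 <= x <= 1.
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.ones(4)
c = np.array([1.0, 2.0])   # p* = -3, attained at x = (-1, -1)

# A dual-feasible multiplier: lam >= 0 with A^T lam + c = 0.
lam = np.array([0.0, 0.0, 1.0, 2.0])
assert np.all(lam >= 0) and np.allclose(A.T @ lam + c, 0)

# Weak duality: -b^T lam <= c^T x for every feasible x.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1.0, 1.0, size=2)  # points of the feasible box
    assert -b @ lam <= c @ x + 1e-12

print(-b @ lam)  # dual bound -3.0; here it matches p*
```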
Linear optimization problem, standard form

We can also consider an LP in standard form:
\[
p^* = \min_x \: c^T x \quad \mbox{subject to} \quad Ax = b, \;\; x \ge 0.
\]
The Lagrange function is
\[
{\cal L}(x,\lambda,\nu) = c^T x - \lambda^T x + \nu^T (Ax - b),
\]
where \(\lambda \ge 0\) is associated with the sign constraints \(-x \le 0\), and \(\nu\) with the equality constraints. As before, the Lagrangian is affine in \(x\), so
\[
g(\lambda,\nu) = \min_x \: {\cal L}(x,\lambda,\nu) = \left\{ \begin{array}{ll} -\nu^T b & \mbox{if } c - \lambda + A^T \nu = 0, \\ -\infty & \mbox{otherwise.} \end{array} \right.
\]
Eliminating \(\lambda = c + A^T \nu \ge 0\) gives the dual problem
\[
d^* = \max_\nu \: -b^T \nu \quad \mbox{subject to} \quad A^T \nu + c \ge 0.
\]
Minimum Euclidean distance problem

Consider the problem of minimizing the Euclidean distance to a given affine space (for convenience, we minimize half the squared distance, which has the same minimizer):
\[
p^* = \min_x \: \frac{1}{2} \|x\|_2^2 \quad \mbox{subject to} \quad Ax = b,
\]
where \(A \in \mathbf{R}^{p \times n}\), \(b \in \mathbf{R}^p\), with \(A\) full row rank. The Lagrangian is
\[
{\cal L}(x,\nu) = \frac{1}{2} \|x\|_2^2 + \nu^T (b - Ax),
\]
and minimizing over \(x\) (setting the gradient to zero gives \(x = A^T \nu\)) yields the dual function
\[
g(\nu) = b^T \nu - \frac{1}{2} \nu^T A A^T \nu.
\]
Maximizing over \(\nu\) gives \(\nu^* = (AA^T)^{-1} b\), hence the lower bound
\[
d^* = \frac{1}{2} \, b^T (AA^T)^{-1} b \le p^*.
\]
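The closed-form dual is easy to verify numerically (the matrix \(A\) and vector \(b\) below are an assumed instance): since \(x = A^T \nu^*\) turns out to be primal feasible with objective equal to \(d^*\), the bound is attained here.

```python
import numpy as np

# Assumed instance: minimize (1/2)||x||^2 subject to A x = b.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

# nu* solves (A A^T) nu = b (A has full row rank), and x = A^T nu*.
nu = np.linalg.solve(A @ A.T, b)
x = A.T @ nu

d_star = b @ nu - 0.5 * nu @ (A @ A.T) @ nu  # g(nu*)
p_val = 0.5 * x @ x                          # primal objective at x

print(np.allclose(A @ x, b), p_val, d_star)  # feasible; both values equal 1
```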
A non-convex boolean problem

For a given symmetric matrix \(W \in \mathbf{R}^{n \times n}\), we consider the problem
\[
p^* = \max_x \: x^T W x \quad \mbox{subject to} \quad x_i^2 = 1, \;\; i = 1, \ldots, n.
\]
Since this is a maximization problem, duality provides an upper bound on \(p^*\).
The Lagrangian writes
\[
{\cal L}(x,\nu) = x^T W x + \sum_{i=1}^n \nu_i (1 - x_i^2) = x^T (W - \mathop{\bf diag}(\nu)) x + \nu^T \mathbf{1},
\]
where \(\mathbf{1}\) is the vector of ones, and \(\mathop{\bf diag}(\nu)\) is the diagonal matrix with \(\nu\) on its diagonal.
To find the dual function, we need to maximize the Lagrangian with respect to the primal variable \(x\). We express this problem as
\[
g(\nu) = \max_x \: x^T (W - \mathop{\bf diag}(\nu)) x + \nu^T \mathbf{1} = \left\{ \begin{array}{ll} \nu^T \mathbf{1} & \mbox{if } \mathop{\bf diag}(\nu) \succeq W, \\ +\infty & \mbox{otherwise.} \end{array} \right.
\]
The dual problem is thus the semidefinite program
\[
d^* = \min_\nu \: \nu^T \mathbf{1} \quad \mbox{subject to} \quad \mathop{\bf diag}(\nu) \succeq W,
\]
and weak duality (for maximization) gives \(p^* \le d^*\).
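For small \(n\), \(p^*\) can be found by brute force and compared with a feasible point of the dual: the choice \(\nu = \lambda_{\max}(W)\,\mathbf{1}\) always satisfies \(\mathop{\bf diag}(\nu) \succeq W\). (The matrix \(W\) below is an assumed instance.)

```python
from itertools import product

import numpy as np

# Assumed instance: a small symmetric matrix W.
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
n = W.shape[0]

# Brute-force p* = max x^T W x over x in {-1, +1}^n (tiny n only).
p_star = max(np.array(s) @ W @ np.array(s) for s in product([-1, 1], repeat=n))

# Dual-feasible nu = lambda_max(W) * 1: diag(nu) - W >= 0 in the PSD
# sense, so 1^T nu = n * lambda_max(W) upper-bounds p* by weak duality.
bound = n * np.linalg.eigvalsh(W).max()

print(p_star, bound)  # p_star <= bound
```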
The Lagrange relaxation of the primal problem can be interpreted geometrically, as follows. |