Convex Problems
Mathematical programming problems

Definition

A mathematical programming problem is one of the form

\[ p^\ast := \min_x \: f_0(x) \;\;\mbox{subject to}\;\; f_i(x) \le 0, \quad i=1,\ldots,m , \]

where \( x \in \mathbf{R}^n \) is the decision variable, \( f_0 \) is the objective function, and the functions \( f_i \), \( i=1,\ldots,m \), are the constraint functions.
The term “programming” (or “program”) does not refer to a computer code. It is used mainly for historical reasons. A more rigorous (but less popular) term is “optimization problem”. The term “subject to” is often replaced by a colon.

By convention, the optimal value \( p^\ast \) is set to \( +\infty \) if the problem is not feasible. The optimal value can also assume the value \( -\infty \), in which case we say that the problem is unbounded below. An example of a problem that is unbounded below is an unconstrained problem with \( f_0(x) = \log x \), with domain \( \mathbf{R}_{++} \).

The set

\[ {\cal X} = \left\{ x \in \mathbf{R}^n ~:~ f_i(x) \le 0, \;\; i=1,\ldots, m \right\} \]

is called the feasible set. Any \( x \in {\cal X} \) is called feasible (with respect to the specific optimization problem at hand).

Alternate representations

Sometimes, the model is described in terms of the feasible set, as follows:

\[ p^\ast = \min_{x \in {\cal X}} \: f_0(x) . \]
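As a concrete illustration of feasibility, here is a small sketch with made-up data (a hypothetical instance, not one from the text): two convex constraint functions on \( \mathbf{R}^2 \), and a membership test for the feasible set.

```python
import numpy as np

# Hypothetical instance in R^2 with two convex constraint functions:
#   f_1(x) = ||x||^2 - 1   (the unit disk),
#   f_2(x) = -x_1          (the half-plane x_1 >= 0).
def f1(x):
    return float(x @ x) - 1.0

def f2(x):
    return float(-x[0])

def is_feasible(x):
    # x belongs to the feasible set iff f_i(x) <= 0 for every i.
    return f1(x) <= 0.0 and f2(x) <= 0.0

inside = is_feasible(np.array([0.5, 0.5]))    # satisfies both constraints
outside = is_feasible(np.array([-0.5, 0.5]))  # violates x_1 >= 0
```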
Also, sometimes a maximization problem is considered:

\[ p^\ast := \max_x \: f_0(x) \;\;\mbox{subject to}\;\; f_i(x) \le 0, \quad i=1,\ldots,m , \]

which can be cast in the standard (minimization) form by using \( -f_0 \) as the objective.
Feasibility problems

In some instances, we do not care about any objective function, and simply seek a feasible point. This so-called feasibility problem can be formulated in the standard form, using a zero (or constant) objective.

Convex optimization problems

Standard form

The problem

\[ p^\ast := \min_x \: f_0(x) \;\;\mbox{subject to}\;\; f_i(x) \le 0, \quad i=1,\ldots,m , \]

is called a convex optimization problem when every function involved, \( f_0, f_1, \ldots, f_m \), is convex.
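To make the zero-objective trick concrete, here is a sketch using scipy's `linprog` on made-up data (the matrix and vector are hypothetical): finding a point satisfying \( Ax \le b \) by minimizing the constant objective \( 0^T x \).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: find any x in R^2 with A x <= b.
A = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

# Standard form with a zero (constant) objective: min 0^T x  s.t.  A x <= b.
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
# res.x is then some feasible point (any one will do).
```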
Examples:
Optimality

Local and global optima

A feasible point \( x^\ast \) is globally optimal (optimal for short) if \( f_0(x^\ast) = p^\ast \). A feasible point \( x^\ast \) is locally optimal if there exists \( R > 0 \) such that \( f_0(x^\ast) \) equals the optimal value of problem (\ref{eq:convex-pb-L4}) with the added constraint \( \|x - x^\ast\|_2 \le R \). That is, \( x^\ast \) solves the problem ‘‘locally’’.

For convex problems, any locally optimal point is globally optimal. Indeed, let \( x^\ast \) be a local minimizer of \( f_0 \) on the set \( {\cal X} \), and let \( y \in {\cal X} \). By definition, \( x^\ast \in {\bf dom} f_0 \). We need to prove that \( f_0(y) \ge f_0(x^\ast) \). There is nothing to prove if \( f_0(y) = +\infty \), so let us assume that \( y \in {\bf dom} f_0 \). By convexity of \( f_0 \) and \( {\cal X} \), we have \( x_\theta := x^\ast + \theta (y - x^\ast) \in {\cal X} \) for every \( \theta \in [0,1] \), and:

\[ f_0(x_\theta) - f_0(x^\ast) \le \theta \left( f_0(y) - f_0(x^\ast) \right) . \]

Since \( x^\ast \) is a local minimizer, the left-hand side of this inequality is nonnegative for small enough \( \theta > 0 \); hence the right-hand side is nonnegative as well, which proves \( f_0(y) \ge f_0(x^\ast) \), as claimed.
Optimal set

The optimal set, \( {\cal X}^{\rm opt} \), is the set of optimal points. This set may be empty: for example, the feasible set may be empty. Another example is when the optimal value is only reached in the limit; think for example of the case when \( n = 1 \), \( f_0(x) = e^{x} \), and there are no constraints: the optimal value \( p^\ast = 0 \) is approached as \( x \to -\infty \), but never attained. In any case, the optimal set is convex, since it can be written

\[ {\cal X}^{\rm opt} = \left\{ x \in \mathbf{R}^n ~:~ f_0(x) \le p^\ast, \;\; x \in {\cal X} \right\} . \]

Optimality condition

When \( f_0 \) is differentiable, then we know that for every \( x, y \in {\bf dom} f_0 \),

\[ f_0(y) \ge f_0(x) + \nabla f_0(x)^T (y - x) . \]

It follows that a feasible point \( x \) is optimal if and only if

\[ \nabla f_0(x)^T (y - x) \ge 0 \;\;\mbox{for every feasible } y . \]
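As a numerical sanity check of the first-order condition, consider a toy problem of my own (not from the text): minimize \( f_0(x) = \|x - a\|_2^2 \) over the nonnegative orthant. The minimizer is the componentwise projection \( x^\ast = \max(a, 0) \), and the inequality \( \nabla f_0(x^\ast)^T (y - x^\ast) \ge 0 \) can be verified on randomly sampled feasible points:

```python
import numpy as np

# Toy problem: minimize ||x - a||^2 subject to x >= 0.
a = np.array([1.0, -2.0, 0.5])
x_star = np.maximum(a, 0.0)          # projection of a onto the orthant

grad = 2.0 * (x_star - a)            # gradient of f_0 at x_star

# First-order optimality: grad^T (y - x_star) >= 0 for every feasible y.
rng = np.random.default_rng(0)
ys = rng.uniform(0.0, 3.0, size=(1000, 3))   # random feasible points y >= 0
violations = int(np.sum(ys @ grad - grad @ x_star < -1e-12))
```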
Examples:
Equivalent problems

We can transform a convex problem into an equivalent one via a number of transformations. Sometimes the transformation is useful to obtain an explicit solution, or is done for algorithmic purposes. A transformation does not necessarily preserve the convexity properties of the problem; here is a list of transformations that do preserve convexity.

Epigraph form

Sometimes it is convenient to work with the equivalent epigraph form:

\[ \min_{x,t} \: t ~:~ f_0(x) \le t, \;\; f_i(x) \le 0, \quad i=1,\ldots,m , \]

in which the objective function is now linear in the variables \( (x,t) \).
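A classic use of the epigraph form is minimizing a pointwise maximum of affine functions, which the epigraph trick turns into a linear program in \( (x,t) \). Here is a sketch with scipy and made-up slopes and intercepts (a hypothetical one-dimensional instance):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical piecewise-linear objective: f_0(x) = max_i (a_i x + b_i), x scalar.
a = np.array([-1.0, 1.0])   # slopes:      f_0(x) = max(-x, x - 1)
b = np.array([0.0, -1.0])   # intercepts

# Epigraph form: minimize t subject to a_i x + b_i <= t for all i.
# Decision variables are (x, t); the objective picks out t.
c = np.array([0.0, 1.0])
A_ub = np.column_stack([a, -np.ones_like(a)])  # a_i x - t <= -b_i
b_ub = -b
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)

x_opt, t_opt = res.x  # optimum of max(-x, x-1) is at x = 0.5, value -0.5
```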
Implicit constraints

Even though some problems appear to be unconstrained, they might contain implicit constraints. Precisely, the problem above has an implicit constraint \( x \in {\cal D} \), where \( {\cal D} \) is the problem’s domain

\[ {\cal D} := {\bf dom} f_0 \bigcap_{i=1}^m {\bf dom} f_i . \]
For example, the problem
Making explicit constraints implicit

The problem in standard form can also be written in a form that makes the constraints that are explicit in the original problem implicit. Indeed, an equivalent formulation is the unconstrained convex problem

\[ \min_x \: f(x), \quad f(x) := f_0(x) + {\bf 1}_{\cal X}(x) , \]

where \( {\bf 1}_{\cal X} \) is the indicator function of the feasible set \( {\cal X} \), which takes the value \( 0 \) on \( {\cal X} \) and \( +\infty \) outside it.
A less radical approach involves the convex problem with a single inequality constraint

\[ \min_x \: f_0(x) ~:~ g(x) \le 0, \quad g(x) := \max_{1 \le i \le m} \: f_i(x) , \]

where \( g \) is convex, as the pointwise maximum of convex functions.
The above transformations show the versatility of the convex optimization model. They are also useful in the analysis of such problems.

Equality constraint elimination

We can eliminate affine equality constraints of the form \( Ax = b \), by writing them as \( x = x_0 + N z \), with \( x_0 \) a particular solution to the equality constraints, and where the columns of \( N \) span the nullspace of \( A \). Then we can rewrite the problem as one without equality constraints:

\[ \min_z \: f_0(x_0 + N z) ~:~ f_i(x_0 + N z) \le 0, \quad i=1,\ldots,m . \]
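A sketch of this elimination in scipy, on hypothetical data: minimize \( \|x\|_2^2 \) subject to \( Ax = b \) by parameterizing \( x = x_0 + N z \) and solving an unconstrained least-squares problem in \( z \).

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical equality constraints A x = b (2 equations, 3 unknowns).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular solution of A x = b
N = null_space(A)                          # columns of N span the nullspace of A

# With x = x0 + N z, the constraint A x = b holds for every z, so
# minimizing ||x||^2 becomes an unconstrained least-squares problem in z.
z = np.linalg.lstsq(N, -x0, rcond=None)[0]
x = x0 + N @ z                             # minimum-norm solution of A x = b
```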
Introducing equality constraints

We can also introduce equality constraints in the problem. There might be several justifications for doing so: to reduce a given problem to a standard form used by off-the-shelf algorithms, or for use in decomposition methods for large-scale optimization. The following example shows that introducing equality constraints may allow one to exploit sparsity patterns inherent to the problem. Consider
Slack variables

Sometimes it is useful to introduce slack variables. For example, the problem with affine inequalities

\[ \min_x \: f_0(x) ~:~ a_i^T x \le b_i, \quad i=1,\ldots,m , \]

can be written equivalently as

\[ \min_{x,s} \: f_0(x) ~:~ a_i^T x + s_i = b_i, \;\; s_i \ge 0, \quad i=1,\ldots,m , \]

where \( s \in \mathbf{R}^m \) is a vector of slack variables.
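A quick numerical check of the slack reformulation, on made-up data: the inequality description \( Ax \le b \) and the slack description \( Ax + s = b \), \( s \ge 0 \), agree at any candidate point.

```python
import numpy as np

# Hypothetical affine inequalities a_i^T x <= b_i.
A = np.array([[ 1.0, 2.0],
              [-1.0, 0.5]])
b = np.array([3.0, 1.0])

x = np.array([0.5, 1.0])   # a candidate point
s = b - A @ x              # slack variables, so that A x + s = b

# The two descriptions of feasibility coincide:
ineq_feasible = bool(np.all(A @ x <= b))
slack_feasible = bool(np.all(s >= 0.0))
```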
Minimizing over some variables

We may ‘‘eliminate’’ some variables of the problem and reduce it to one with fewer variables. This operation preserves convexity. Specifically, if \( f \) is a convex function of the variable \( x \in \mathbf{R}^n \), and \( x \) is partitioned as \( x = (x_1, x_2) \), with \( x_1 \in \mathbf{R}^{n_1} \), \( x_2 \in \mathbf{R}^{n_2} \), \( n = n_1 + n_2 \), then the function

\[ g(x_1) := \min_{x_2} \: f(x_1, x_2) \]

is convex in \( x_1 \).
Here is an example where this operation is useful: consider the problem
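As a small numerical illustration of partial minimization (a made-up quadratic, not the problem from the text): for \( f(x_1, x_2) = x_1^2 + x_2^2 + x_1 x_2 \), minimizing over \( x_2 \) gives \( x_2 = -x_1/2 \) and \( g(x_1) = \tfrac{3}{4} x_1^2 \), which a grid search over \( x_2 \) confirms.

```python
import numpy as np

def f(x1, x2):
    # A jointly convex quadratic in (x1, x2) (hypothetical example).
    return x1**2 + x2**2 + x1 * x2

def g(x1):
    # Partial minimization over x2, approximated by a fine grid search.
    x2_grid = np.linspace(-5.0, 5.0, 100001)
    return float(f(x1, x2_grid).min())

# Analytically the inner minimizer is x2 = -x1/2, so g(x1) = 0.75 * x1**2,
# which is a convex function of x1 alone.
g_numeric = g(2.0)
g_exact = 0.75 * 2.0**2
```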
Complexity

The complexity of an optimization problem refers to the difficulty of solving the problem on a computer. Roughly speaking, complexity estimates provide the number of flops needed to solve the problem to a given accuracy on the objective. That number is usually expressed in terms of the problem size, as well as the accuracy \( \epsilon \). The complexity usually refers to a specific method to solve the problem. For example, for a dense linear optimization problem with \( n \) variables and \( m \) constraints, the effort needed to reach accuracy \( \epsilon \) with an ‘‘interior-point’’ method (this term refers to a class of methods that are provably efficient for a large class of convex optimization problems) grows polynomially with \( m \) and \( n \), and with \( \log(1/\epsilon) \).

The complexity of an optimization problem also depends on its structure. Two seemingly similar problems may require widely different computational efforts to solve. Some problems are “NP-hard”, which roughly means that they cannot be solved in reasonable time on a computer. As an example, the quadratic programming problem (\ref{eq:quad-prob-L1}) seen above is “easy” to solve, however the apparently similar problem obtained by allowing the matrix in the quadratic objective to have negative eigenvalues is NP-hard.

In the early days of optimization, it was thought that linearity was what distinguished a hard problem from an easy one. Today, it appears that convexity is the relevant notion. Roughly speaking, a convex problem is easy. In the next section, we will refine this statement.