# Disciplined Convex Programming and CVX

For more details on disciplined convex programming, see here.
## Basic idea

If an optimization problem turns out to be convex, special-purpose algorithms can be used to solve it efficiently. The difficulty for the modeler is that proving convexity of a problem is, in general, very hard. There is no magic tool into which a user can plug an arbitrary optimization problem and hope that it recognizes convexity and exploits it in the solution method. Disciplined convex programming instead restricts the set of allowed problems: problems are constructed so as to be convex from the outset, and convexity verification is therefore automatic. In the software package CVX, the allowed objective and constraint functions are built from a library of basic convex functions. Calculus rules and transformations then allow handling functions beyond those in the library.

## The ruleset

The ruleset corresponds to operations that preserve convexity, and includes rules of the following kind.
- A non-negative weighted sum of convex functions is convex.
- The composition of a convex function with an affine mapping, $x \mapsto f(Ax + b)$, is convex.
- The pointwise maximum of convex functions is convex.
- The composition $h(g(x))$, with $g$ convex and $h$ convex and non-decreasing, is convex.
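These rules can be spot-checked numerically. The sketch below (plain Python, not CVX; the function `f` is an arbitrary illustration of ours, not a library atom) verifies midpoint convexity for the pointwise maximum of two convex functions:

```python
# Numerical spot-check of one ruleset item: the pointwise maximum of two
# convex functions is convex.  f below is an arbitrary illustrative choice.

def f(x):
    return max(x ** 2, abs(x - 1))  # pointwise max of two convex functions

def midpoint_convex(g, a, b, tol=1e-12):
    # Convexity implies g((a + b) / 2) <= (g(a) + g(b)) / 2.
    return g((a + b) / 2) <= (g(a) + g(b)) / 2 + tol

pairs = [(-2 + 0.1 * i, -2 + 0.17 * j) for i in range(40) for j in range(24)]
assert all(midpoint_convex(f, a, b) for a, b in pairs)
print("midpoint convexity holds on all sampled pairs")
```

Such a sampling check can of course only refute convexity, never prove it; the point of the ruleset is precisely that CVX establishes convexity symbolically rather than numerically.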
For example, if CVX encounters a constraint of the form $f(x) + g(x) \le 0$, with $f$ and $g$ convex functions in the library, then it introduces two new variables $t_1, t_2$ together with the convex constraints $f(x) \le t_1$ and $g(x) \le t_2$, and replaces the original constraint with $t_1 + t_2 \le 0$. This way, only ‘‘basic’’ constraints have to be handled when solving the problem.

## Example

Consider the problem of sparse linear least-squares:
$$\min_x \;\; (Ax - y)^T (Ax - y) + \|x\|_1,$$
where $A \in \mathbf{R}^{m \times n}$ and $y \in \mathbf{R}^m$ are given. A CVX code for solving this is almost a verbatim version of the above.

**CVX syntax**
```matlab
% input:  m x n data matrix A, m-vector y
% output: optimal solution x in R^n
cvx_begin
    variable x(n,1)
    minimize( (A*x-y)'*(A*x-y) + norm(x,1) )
cvx_end
```

CVX recognizes the first term as an affine composition involving the squared Euclidean norm $z \mapsto z^T z$, which is part of the library. The second term is an $\ell_1$-norm, which is also part of the library. CVX knows how to handle this norm by introducing a new variable $u \in \mathbf{R}^n$, replacing the norm by $\mathbf{1}^T u$, and adding the constraints $x_i \le u_i$, $-x_i \le u_i$, $i = 1, \ldots, n$. The problem is replaced with the equivalent form
$$\min_{x, u} \;\; (Ax - y)^T (Ax - y) + \mathbf{1}^T u ~:~ x_i \le u_i, \;\; -x_i \le u_i, \;\; i = 1, \ldots, n.$$
Using an interior-point method amounts to handling the inequality constraints via a ‘‘barrier’’ term in the objective. The problem is further replaced with
$$\min_{x, u} \;\; (Ax - y)^T (Ax - y) + \mathbf{1}^T u - \mu \sum_{i=1}^n \big( \log(u_i - x_i) + \log(u_i + x_i) \big),$$
where $\mu > 0$ is a small parameter. A final step is to add extra variables so as to make sure the objective only contains basic functions:
$$\min_{x, u, w} \;\; w^T w + \mathbf{1}^T u - \mu \sum_{i=1}^n \big( \log(u_i - x_i) + \log(u_i + x_i) \big) ~:~ w = Ax - y.$$
The problem above has linear equality constraints only, and its objective is a sum of functions that all belong to the library. Hence its gradient and Hessian (the objects needed to implement an interior-point method) can easily be derived from those of the library functions.

## Some do's and don'ts of CVX

Consider the following variation on the problem above:
$$\min_x \;\; (Ax - y)^T (Ax - y) + \|x\|_1^2,$$
where the $\ell_1$-norm is squared. We may be tempted to write the above problem in CVX as follows.

**A wrong CVX implementation**
```matlab
cvx_begin
    variable x(n,1)
    minimize( (A*x-y)'*(A*x-y) + norm(x,1)^2 )
cvx_end
```

However, CVX fails on this expression of the problem, as it cannot recognize the convexity of the squared $\ell_1$-norm as written. The reason is that the square of a convex function is guaranteed to be convex only if that function is non-negative: the composition $h(g(x))$ is convex when $g$ is convex and $h$ is convex and non-decreasing on the range of $g$, and $z \mapsto z^2$ is non-decreasing only for $z \ge 0$. To square a convex function that is non-negative, we use the square_pos function, which represents $z \mapsto \max(z, 0)^2$.
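This failure is not a mere formality of the ruleset: squaring a convex function that changes sign can genuinely destroy convexity. A small plain-Python check (illustrative, not CVX; the function `f` is our own example) exhibits a midpoint-convexity violation for $f(x) = x^2 - 1$, which is convex but takes negative values:

```python
# f is convex but takes negative values, and its square is NOT convex:
# g = f^2 has a local maximum at x = 0.

def f(x):
    return x ** 2 - 1          # convex, but negative on (-1, 1)

def g(x):
    return f(x) ** 2           # square of f

# Midpoint convexity would require g(0) <= (g(-1) + g(1)) / 2.
lhs = g(0.0)                   # = 1.0
rhs = (g(-1.0) + g(1.0)) / 2   # = 0.0
assert lhs > rhs               # convexity is violated at the midpoint

# Scalar analogue of CVX's square_pos: max(z, 0)^2 is convex and
# non-decreasing, so composing it with a convex function stays convex.
def square_pos(z):
    return max(z, 0.0) ** 2

print("convexity violated:", lhs, ">", rhs)
```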
**CVX implementation**

```matlab
cvx_begin
    variable x(n,1)
    minimize( (A*x-y)'*(A*x-y) + square_pos(norm(x,1)) )
cvx_end
```

Since $\|x\|_1 \ge 0$, the term square_pos(norm(x,1)) coincides with $\|x\|_1^2$, and CVX can now certify convexity.

## Pros and cons of CVX

This approach does restrict the available models: a convex problem whose convexity cannot be established through the ruleset cannot be expressed directly, and may require manual reformulation by the user.
The good news is that any problem that passes the ruleset is guaranteed to be convex, and can therefore be solved reliably and efficiently, without the user having to prove convexity or implement a specialized algorithm.
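As a closing illustration of the ‘‘barrier’’ step used in the Example above, the sketch below (plain Python, a toy one-dimensional setting of our own, not CVX internals) minimizes $x$ subject to $x \ge 1$ via the barrier objective $x - \mu \log(x - 1)$; its unconstrained minimizer is $x = 1 + \mu$, which approaches the constrained optimum as $\mu \to 0$:

```python
# Toy barrier method: minimize x subject to x >= 1.
# Barrier problem: minimize  phi(x) = x - mu * log(x - 1)  over x > 1.
# Setting phi'(x) = 1 - mu / (x - 1) = 0 gives the minimizer x* = 1 + mu.

def barrier_minimizer(mu, lo=1.0 + 1e-12, hi=1e6, iters=200):
    # Bisection on phi'(x) = 1 - mu / (x - 1), which is increasing in x.
    dphi = lambda x: 1.0 - mu / (x - 1.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for mu in [1.0, 0.1, 0.01]:
    x = barrier_minimizer(mu)
    assert abs(x - (1.0 + mu)) < 1e-6   # matches the closed form 1 + mu
print("barrier minimizers approach the constrained optimum x = 1")
```

As $\mu$ shrinks, the barrier solution slides toward the constraint boundary, which is exactly the mechanism an interior-point method exploits in the reformulated least-squares problem above.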