Semidefinite Optimization
Definition and standard forms

Definition

We say that a problem is a semidefinite optimization problem (SDP) if it is a conic optimization problem of the form
$$
\min_x \; c^T x \;:\; Ax = b, \quad x \in \mathcal{K},
$$
where the cone $\mathcal{K}$ is a product of semidefinite cones: $\mathcal{K} = \mathcal{S}_+^{n_1} \times \cdots \times \mathcal{S}_+^{n_p}$.
Standard form

Often, it is useful to think of $x$ as a matrix variable. The following standard form formalizes this. First note that, without loss of generality, we may assume that $\mathcal{K} = \mathcal{S}_+^n$ is a single semidefinite cone. Let us take an example with $\mathcal{K} = \mathcal{S}_+^{n_1} \times \mathcal{S}_+^{n_2}$ to illustrate why. Indeed, the condition $(X_1, X_2) \in \mathcal{K}$ is the same as
$$
X := \begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix} \succeq 0 .
$$
Thus, by imposing that certain elements of the matrix $X$ be zero (those outside the diagonal blocks of sizes $n_1$, $n_2$), we can always reduce our problem to the case when the cone is the semidefinite cone $\mathcal{S}_+^n$ itself (with $n = n_1 + n_2$), and the matrix variable $X$ is constrained to be block diagonal (with block sizes $n_1$, $n_2$), and live in that cone. A standard form for the SDP model is thus
$$
\min_X \; \langle C, X \rangle \;:\; \langle A_i, X \rangle = b_i, \;\; i = 1, \dots, m, \quad X \succeq 0,
$$
where $\langle A, B \rangle := \operatorname{trace}(A^T B)$ denotes the standard inner product on the space of symmetric matrices.
Inequality form

An alternate standard form, called the inequality standard form, is
$$
\min_x \; c^T x \;:\; F(x) := F_0 + x_1 F_1 + \cdots + x_m F_m \succeq 0,
$$
where the $F_i$, $i = 0, \dots, m$, are given symmetric matrices. The constraint $F(x) \succeq 0$ is called a linear matrix inequality (LMI).
Special cases

As discussed above, we can use the above formalism to handle multiple LMIs, using block-diagonal matrices. To illustrate this, we show that SDPs contain LPs as a special case. Indeed, the LP
$$
\min_x \; c^T x \;:\; Ax \le b
$$
can be written as the SDP
$$
\min_x \; c^T x \;:\; F(x) := \operatorname{diag}(b - Ax) \succeq 0,
$$
since a diagonal matrix is positive semidefinite if and only if its diagonal entries are nonnegative: each scalar inequality becomes a $1 \times 1$ LMI, and the inequalities are stacked into one block-diagonal LMI.
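As a quick numerical sanity check (not part of the original notes; the data `A`, `b`, `x` below is arbitrary), we can verify that feasibility for the LP constraint $Ax \le b$ coincides with positive semidefiniteness of the diagonal matrix $\operatorname{diag}(b - Ax)$:

```python
import numpy as np

# Illustrative data: 3 inequalities in 2 variables (chosen arbitrarily).
A = np.array([[1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])

def lmi_feasible(x):
    """True iff the diagonal LMI F(x) = diag(b - A x) is PSD."""
    F = np.diag(b - A @ x)
    return np.linalg.eigvalsh(F).min() >= 0

x_feas = np.array([1.0, 1.0])    # satisfies A x <= b componentwise
x_infeas = np.array([5.0, 0.0])  # violates the first inequality

print(lmi_feasible(x_feas))      # True:  all entries of b - A x are >= 0
print(lmi_feasible(x_infeas))    # False: one diagonal entry is negative
```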
Similarly, SDPs contain SOCPs, by virtue of the following result, already mentioned in XXX: for every $x \in \mathbb{R}^n$ and $t \in \mathbb{R}$,
$$
\|x\|_2 \le t \iff \begin{pmatrix} t I_n & x \\ x^T & t \end{pmatrix} \succeq 0 .
$$
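The equivalence between the second-order cone constraint and the LMI above can be checked numerically; the following sketch (with arbitrary random data) builds the "arrow" matrix and tests its smallest eigenvalue on either side of the boundary $t = \|x\|_2$:

```python
import numpy as np

def arrow(x, t):
    """Build the matrix [[t*I, x], [x^T, t]] from the SOC/LMI equivalence."""
    n = x.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = t * np.eye(n)
    M[:n, n] = x
    M[n, :n] = x
    M[n, n] = t
    return M

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
t_in = np.linalg.norm(x) + 0.1   # strictly inside the cone: ||x|| <= t
t_out = np.linalg.norm(x) - 0.1  # outside the cone: ||x|| > t

print(np.linalg.eigvalsh(arrow(x, t_in)).min() >= 0)   # True:  PSD
print(np.linalg.eigvalsh(arrow(x, t_out)).min() >= 0)  # False: not PSD
```

(The eigenvalues of the arrow matrix are $t$, with multiplicity $n-1$, and $t \pm \|x\|_2$, which is why PSD-ness is exactly the condition $\|x\|_2 \le t$.)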
Examples

SDP relaxation of boolean problems

In lecture 5, we have seen LP relaxations for boolean LPs, which are LPs with the added non-convex requirement that the decision variable should be a boolean vector. This approach does not easily extend to the case when the problem involves quadratics. To illustrate this, consider the max-cut problem. We are given a graph with vertices labeled $1, \dots, n$, with weights $w_{ij} \ge 0$ for every pair of vertices $(i, j)$. We seek a cut (a subset $S$ of the vertex set) such that the total weight of all the edges that leave $S$ is maximized. Encoding a cut by a vector $x \in \{-1, 1\}^n$, with $x_i = 1$ if $i \in S$ and $x_i = -1$ otherwise, this can be expressed as the quadratic boolean problem
$$
\max_x \; \frac{1}{4} \sum_{i,j} w_{ij} (1 - x_i x_j) \;:\; x \in \{-1, 1\}^n .
$$
To explain the relaxation, let us embed the above problem into the more general problem of maximizing a given quadratic form over a boolean set:
$$
p^* = \max_x \; x^T W x \;:\; x \in \{-1, 1\}^n,
$$
where $W$ is a given symmetric matrix.
We can express the problem as
$$
p^* = \max_X \; \langle W, X \rangle \;:\; X \succeq 0, \;\; X_{ii} = 1, \; i = 1, \dots, n, \;\; \operatorname{rank}(X) = 1,
$$
since the matrices of the form $X = x x^T$ with $x \in \{-1, 1\}^n$ are exactly the positive semidefinite matrices of rank one with unit diagonal, and $x^T W x = \langle W, x x^T \rangle$.
Relaxing (meaning: ignoring) the rank constraint leads to the upper bound $p^* \le d^*$, where
$$
d^* = \max_X \; \langle W, X \rangle \;:\; X \succeq 0, \;\; X_{ii} = 1, \; i = 1, \dots, n .
$$
The latter problem is an SDP.
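For intuition on what the boolean problem computes, here is a brute-force evaluation of the max-cut objective $\frac14 \sum_{i,j} w_{ij}(1 - x_i x_j)$ on a small graph (the 3-node weight matrix below is made up for illustration; computing the SDP bound $d^*$ would additionally require a semidefinite solver, which we do not assume here):

```python
import itertools
import numpy as np

# Arbitrary symmetric weight matrix of a 3-node graph:
# edge (1,2) has weight 1, (1,3) weight 2, (2,3) weight 3.
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])

def cut_value(x):
    """Weight of the cut encoded by x in {-1,1}^n (double sum over i,j)."""
    x = np.asarray(x, dtype=float)
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

# Enumerate all 2^n sign vectors and keep the best cut.
best = max(cut_value(x) for x in itertools.product([-1, 1], repeat=3))
print(best)   # 5.0: cutting node 3 away severs edges of weight 2 + 3
```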
Non-convex quadratic optimization

The approach used above can be applied to general non-convex quadratic optimization, which has the form
$$
\min_x \; q_0(x) \;:\; q_i(x) \le 0, \;\; i = 1, \dots, m,
$$
where the functions $q_i(x) = x^T A_i x + 2 b_i^T x + c_i$, $i = 0, \dots, m$, are quadratic.
We note that $q_i(x) = f_i(x x^T, x)$, $i = 0, \dots, m$, with the affine functions:
$$
f_i(X, x) := \langle A_i, X \rangle + 2 b_i^T x + c_i .
$$
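The lifting identity $q_i(x) = f_i(x x^T, x)$ rests on the trace formula $x^T A x = \langle A, x x^T \rangle$, which a one-line numerical check confirms (all data below is random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric quadratic-form matrix
b = rng.standard_normal(n)
c = 0.7
x = rng.standard_normal(n)

q = x @ A @ x + 2 * b @ x + c             # quadratic function q(x)
X = np.outer(x, x)                        # lifted rank-one variable X = x x^T
f = np.trace(A.T @ X) + 2 * b @ x + c     # affine function f(X, x)

print(np.isclose(q, f))   # True: the quadratic and lifted values agree
```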
Stability analysis of uncertain dynamical systems

Consider a time-varying dynamical system of the form
$$
x(t+1) = A(t) x(t), \quad t = 0, 1, 2, \dots,
$$
where, for every $t$, the matrix $A(t)$ is only known to belong to a finite set $\{A_1, \dots, A_L\}$.
Let us denote by $\|\cdot\|$ the largest singular value norm. Clearly, if for every $t$, we have $\|A(t)\| \le \gamma$ for some $\gamma < 1$, then $\|x(t)\| \le \gamma^t \|x(0)\|$, which shows that the state vector goes to zero at least as fast as $\gamma^t$. The norm condition ``$\|A(t)\| \le \gamma$ for every $t$'' is conservative, since it implies that the state norm decreases monotonically. To refine the norm-based sufficient condition for asymptotic stability, we can use scaling. For $P$ a given invertible matrix, we define $\bar{x}(t) = P x(t)$. In the new state space defined by $\bar{x}$, the state equations become $\bar{x}(t+1) = \bar{A}(t) \bar{x}(t)$, with $\bar{A}(t) := P A(t) P^{-1}$. The asymptotic stability of the system is independent of the choice of $P$, since $x(t)$ converges to zero if and only if $\bar{x}(t)$ does. However, the norm-based sufficient condition for asymptotic stability is not independent of the choice of $P$. Indeed, if we impose that condition on $\bar{A}$, we obtain $\|P A(t) P^{-1}\| \le \gamma$ for every $t$. In turn, this condition writes\footnote{Here, we use the fact that for a matrix $A$, the condition $\|A\| \le \gamma$ is equivalent to $A^T A \preceq \gamma^2 I$ (try to show this).}
$$
A(t)^T Q A(t) \preceq \gamma^2 Q \;\text{ for every } t, \quad \text{where } Q := P^T P \succ 0 .
$$
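The equivalence in the footnote can be probed numerically. The sketch below (random data, illustrative only) sets $\gamma$ to the scaled norm $\|P A P^{-1}\|$, so the LMI $\gamma^2 Q - A^T Q A \succeq 0$ with $Q = P^T P$ holds with equality on its boundary, and fails for any smaller $\gamma$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = 0.4 * rng.standard_normal((n, n))           # some system matrix
P = rng.standard_normal((n, n)) + 2 * np.eye(n) # invertible scaling
Q = P.T @ P                                     # Q = P^T P > 0

# gamma = ||P A P^{-1}|| (largest singular value, ord=2 matrix norm)
gamma = np.linalg.norm(P @ A @ np.linalg.inv(P), 2)

M = gamma**2 * Q - A.T @ Q @ A          # LMI residual at gamma
M2 = (0.9 * gamma)**2 * Q - A.T @ Q @ A # LMI residual below gamma

print(np.linalg.eigvalsh(M).min() >= -1e-9)  # True:  LMI holds at gamma
print(np.linalg.eigvalsh(M2).min() < 0)      # True:  LMI fails below gamma
```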
We conclude that the existence of a matrix $Q \succ 0$ and a scalar $\gamma \in [0, 1)$ such that
$$
A_i^T Q A_i \preceq \gamma^2 Q, \quad i = 1, \dots, L,
$$
guarantees the asymptotic stability of the system: the scaled state norm $\|P x(t)\|$ decreases at least as fast as $\gamma^t$, whatever the choices $A(t) \in \{A_1, \dots, A_L\}$.
Note that since, for fixed $Q \succ 0$, the set $\{A : A^T Q A \preceq \gamma^2 Q\}$ is convex, requiring that it contains the finite set $\{A_1, \dots, A_L\}$ is the same as requiring that it contains its convex hull. So the above condition also ensures stability of the system when $A(t)$ is allowed to be any time-varying convex combination of the matrices $A_1, \dots, A_L$. Also, note that when $L = 1$, the above condition turns out to be exact, and equivalent to the condition that all the eigenvalues of the matrix $A_1$ lie in the interior of the unit circle of the complex plane. This is the basic result of the so-called Lyapunov asymptotic stability theorem for linear systems.
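The $L = 1$ case can be illustrated without a semidefinite solver: for a matrix $A$ with spectral radius below one, the discrete Lyapunov equation $A^T Q A - Q = -I$ has a unique solution, which is positive definite and satisfies the strict LMI $A^T Q A \prec Q$. The sketch below (with an arbitrary stable $A$) solves that linear equation by vectorization, using the identity $\operatorname{vec}(A^T Q A) = (A^T \otimes A^T)\operatorname{vec}(Q)$:

```python
import numpy as np

A = np.array([[0.5, 0.4],
              [0.0, 0.3]])   # eigenvalues 0.5 and 0.3: spectral radius < 1
n = A.shape[0]

# Solve A^T Q A - Q = -I as the linear system (K - I) vec(Q) = -vec(I),
# where K = A^T kron A^T represents Q |-> A^T Q A in vectorized form.
K = np.kron(A.T, A.T)
vecQ = np.linalg.solve(K - np.eye(n * n), -np.eye(n).ravel())
Q = vecQ.reshape(n, n)

print(np.linalg.eigvalsh(Q).min() > 0)                # True: Q > 0
print(np.linalg.eigvalsh(A.T @ Q @ A - Q).max() < 0)  # True: A^T Q A < Q
```

(The system is solvable because the eigenvalues of $K$ are products of pairs of eigenvalues of $A$, all of modulus less than one, so $K - I$ is invertible.)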