Outer Spherical Approximations
Let us consider a candidate sphere of center $x_0$ and radius $R$. We formulate the condition that this sphere contains the intersection of the $m$ spheres of center $x_i$ and radius $\rho_i$, $i = 1,\ldots,m$, as follows:

$$\forall x:\ \|x - x_i\|_2 \le \rho_i,\ i = 1,\ldots,m \;\Longrightarrow\; \|x - x_0\|_2 \le R.$$

This condition appears to be hard to check.

Outer spherical approximation via gridding

A first approach, which works well only in moderate dimensions (2D or 3D), simply entails gridding the boundary of the intersection. In 2D, we can parametrize the boundary explicitly, as a curve, using an angular parameter. For each angular direction $\theta$, we can easily find the point located on the boundary of the intersection in the direction $u(\theta) = (\cos\theta, \sin\theta)$: starting from a point $\bar{x}$ inside the intersection, we simply maximize $t \ge 0$ such that the point $\bar{x} + t\,u(\theta)$ is inside every one of the spheres. (There is an explicit formula for the maximal value, obtained by solving a scalar quadratic equation for each sphere.) Once we have computed points on the boundary, $z_j$, $j = 1,\ldots,N$, we simply solve the SOCP

$$\min_{x_0,\,R}\ R \;:\; \|z_j - x_0\|_2 \le R, \quad j = 1,\ldots,N.$$

The gridding approach suffers from the fact that we need a fine grid to guarantee that the spherical approximation we compute indeed contains all the points of the intersection. On the other hand, too fine a grid results in a large number of constraints, which in turn adversely impacts the time needed to solve the above problem.

Outer spherical approximation via Lagrange duality

We now develop an alternate approach which relies on weak duality concepts.

A sufficient condition

A sufficient condition for the containment condition to hold is that there exists a non-negative vector $\lambda \in \mathbb{R}^m$ such that

$$\forall x:\ \|x - x_0\|_2^2 - R^2 \le \sum_{i=1}^m \lambda_i \left(\|x - x_i\|_2^2 - \rho_i^2\right).$$

Indeed, it is easily checked that if a point $x$ belongs to all the spheres, then each term $\|x - x_i\|_2^2 - \rho_i^2$ is non-positive, so the sum above is less than or equal to zero when $\lambda \ge 0$; hence $\|x - x_0\|_2 \le R$. Minimizing the radius subject to the condition that the above holds for some $\lambda \ge 0$ will result in a conservative (larger) estimate of the amount of uncertainty.
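The gridding approach described above can be sketched numerically as follows. This is a minimal illustration, not a reference implementation: the function names, the reference interior point `xbar`, and the crude final enclosing-circle step are our own choices, and a real implementation would instead solve the small SOCP above with a conic solver.

```python
import numpy as np

def boundary_points(centers, radii, xbar, n_grid=360):
    """Sample the boundary of the intersection of 2D disks.

    For each angle theta, find the largest t >= 0 such that
    xbar + t * u(theta) lies in every disk; xbar is assumed to be
    a point inside the intersection.
    """
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    pts = []
    for th in thetas:
        u = np.array([np.cos(th), np.sin(th)])
        t_max = np.inf
        for xi, rho in zip(centers, radii):
            d = xbar - xi
            b = u @ d
            disc = b * b - (d @ d - rho * rho)  # >= 0 since xbar is in disk i
            t_i = -b + np.sqrt(max(disc, 0.0))  # largest root of the quadratic
            t_max = min(t_max, t_i)
        pts.append(xbar + t_max * u)
    return np.array(pts)

def outer_sphere(pts):
    """Crude outer circle of the sampled points: center at their mean,
    radius = max distance (an SOCP solver would minimize R instead)."""
    x0 = pts.mean(axis=0)
    return x0, np.max(np.linalg.norm(pts - x0, axis=1))
```

For two unit disks centered at $(0,0)$ and $(1,0)$, the sampled boundary is the lens between the circles, and the enclosing circle found this way has center $(1/2, 0)$ and radius $\sqrt{3}/2$.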
SOCP formulation

Alternatively, we can express the above condition as

$$f(x_0, \lambda) \le R^2, \qquad f(x_0,\lambda) := \max_x\ \|x - x_0\|_2^2 - \sum_{i=1}^m \lambda_i\left(\|x - x_i\|_2^2 - \rho_i^2\right).$$

The problem of minimizing the radius subject to the above condition can thus be written as

$$\min_{x_0,\,R,\,\lambda \ge 0}\ R \;:\; f(x_0,\lambda) \le R^2.$$

It turns out that we can express the function $f$ in closed form, as a direct computation shows:

$$f(x_0,\lambda) = \begin{cases} \|x_0\|_2^2 + \lambda^T\sigma + \dfrac{\|x_0 - X\lambda\|_2^2}{\mathbf{1}^T\lambda - 1} & \text{if } \mathbf{1}^T\lambda > 1, \\[1mm] \|x_0\|_2^2 + \lambda^T\sigma & \text{if } \mathbf{1}^T\lambda = 1 \text{ and } x_0 = X\lambda, \\[1mm] +\infty & \text{otherwise,} \end{cases}$$

where for notational convenience we use the matrix $X := [x_1,\ldots,x_m]$ and the vector $\sigma$ with components $\sigma_i := \rho_i^2 - \|x_i\|_2^2$, $i = 1,\ldots,m$. (Remember that our measurement consistency condition holds, so that the intersection of the spheres is not empty.) We are led to consider the sub-problem of minimizing $f$ over $x_0$, with $\lambda$ fixed. If $\mathbf{1}^T\lambda = 1$, then we must have $x_0 = X\lambda$. If $\mathbf{1}^T\lambda > 1$, then $x_0$ must solve

$$\min_{x_0}\ \|x_0\|_2^2 + \frac{\|x_0 - X\lambda\|_2^2}{\mathbf{1}^T\lambda - 1}.$$

Again, this is a convex quadratic problem, with a (unique) solution obtained by taking derivatives. The minimizer is

$$x_0 = \frac{X\lambda}{\mathbf{1}^T\lambda} = \sum_{i=1}^m \frac{\lambda_i}{\mathbf{1}^T\lambda}\, x_i,$$

that is, $x_0$ is the weighted average, with weights given by $\lambda/(\mathbf{1}^T\lambda)$, of the points $x_i$. Note that the expression remains valid when $\mathbf{1}^T\lambda = 1$, since then we must have $x_0 = X\lambda$. Plugging the above value of $x_0$ into the problem leads to a problem in the variable $\lambda$ only:

$$R^2 = \min_{\lambda}\ \lambda^T\sigma + \frac{\|X\lambda\|_2^2}{\mathbf{1}^T\lambda} \;:\; \lambda \ge 0,\ \mathbf{1}^T\lambda \ge 1.$$

The above problem can be solved as an SOCP with a rotated second-order cone constraint:

$$\min_{\lambda,\,t}\ \lambda^T\sigma + t \;:\; \|X\lambda\|_2^2 \le t\,(\mathbf{1}^T\lambda),\ \lambda \ge 0,\ \mathbf{1}^T\lambda \ge 1,$$

with the optimal radius obtained as the square root of the optimal value.

A simpler formulation (consistent measurements)

We can obtain a simpler formulation that involves a QP only. We now use our measurement consistency assumption, which translates as the fact that the intersection of the spheres contains at least one point $\bar{x}$. First we replace the variable $\lambda$ with two new variables $s, \mu$, and a new constraint. The new variables are defined as $s := \mathbf{1}^T\lambda$ and $\mu := \lambda/s$, and the new constraint is $\mathbf{1}^T\mu = 1$. We obtain the new problem

$$R^2 = \min_{s,\,\mu}\ s\left(\mu^T\sigma + \|X\mu\|_2^2\right) \;:\; \mu \ge 0,\ \mathbf{1}^T\mu = 1,\ s \ge 1.$$

Since $X\mu$ minimizes the weighted sum $c \mapsto \sum_i \mu_i\|x_i - c\|_2^2$, we have

$$\mu^T\sigma + \|X\mu\|_2^2 = \sum_{i=1}^m \mu_i\rho_i^2 - \sum_{i=1}^m \mu_i\|x_i - X\mu\|_2^2 \ge \sum_{i=1}^m \mu_i\left(\rho_i^2 - \|x_i - \bar{x}\|_2^2\right) \ge 0,$$

so the term inside the parentheses is non-negative, and thus $s = 1$ is optimal. Hence the problem reduces to a QP:

$$R^2 = \min_{\mu}\ \mu^T\sigma + \|X\mu\|_2^2 \;:\; \mu \ge 0,\ \mathbf{1}^T\mu = 1.$$

This provides the optimal radius. The optimal center is $x_0 = X\mu$.
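The QP over the simplex is small enough to be solved by a simple projected-gradient iteration. The sketch below is an illustrative assumption, not part of the original formulation: the solver choice, the simplex-projection helper, and the iteration count are ours.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the simplex {mu >= 0, sum(mu) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, v.size + 1)
    k = idx[u - css / idx > 0][-1]
    return np.maximum(v - css[k - 1] / k, 0.0)

def outer_sphere_qp(centers, radii, n_iter=5000):
    """Minimize mu^T sigma + ||X mu||^2 over the simplex by projected
    gradient descent; returns the center X mu and the radius."""
    X = np.asarray(centers, dtype=float).T          # columns are the x_i
    rho = np.asarray(radii, dtype=float)
    sigma = rho ** 2 - np.sum(X ** 2, axis=0)       # sigma_i = rho_i^2 - ||x_i||^2
    G = X.T @ X
    step = 1.0 / (2.0 * np.linalg.norm(G, 2) + 1e-12)  # 1/L for gradient sigma + 2 G mu
    mu = np.full(X.shape[1], 1.0 / X.shape[1])
    for _ in range(n_iter):
        mu = project_simplex(mu - step * (sigma + 2.0 * G @ mu))
    x0 = X @ mu
    return x0, np.sqrt(max(sigma @ mu + x0 @ x0, 0.0))
```

For two disks centered at $(0,0)$ and $(1,0)$ with radii $1$ and $0.8$, the objective in $\mu = (1-a, a)$ is $1 - 1.36a + a^2$, minimized at $a = 0.68$, so the iteration converges to center $(0.68, 0)$ and radius $\sqrt{0.5376}$.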
An improved condition via affine duality

To improve our condition, we start from the following condition, equivalent to containment of the intersection in the candidate sphere:

$$\forall x:\ \|x - x_0\|_2^2 - R^2 \le \max_{\lambda \ge 0}\ \sum_{i=1}^m \lambda_i\left(\|x - x_i\|_2^2 - \rho_i^2\right).$$

(The right-hand side equals zero when $x$ belongs to the intersection, and $+\infty$ otherwise.) We now consider a relaxed condition of the form

$$\forall x:\ \|x - x_0\|_2^2 - R^2 \le \sum_{i=1}^m \lambda_i(x)\left(\|x - x_i\|_2^2 - \rho_i^2\right),$$

where the $\lambda_i$'s are now non-negative functions of $x$ on the intersection. (Before, we chose them to have constant values.) The above is still hard to check in general, but becomes easy if we make the assumption that the functions $\lambda_i$ are affine. We proceed by taking the vector $\lambda(x)$ to be of the form

$$\lambda(x) = \mu + V^T x,$$

where $\mu \in \mathbb{R}^m$ and $V \in \mathbb{R}^{n \times m}$ will be variables. (The case treated before is recovered upon setting $\mu = \lambda$, $V = 0$.)

The condition above becomes a single quadratic condition on $x$. (For the right-hand side to be quadratic, the cubic term $\|x\|_2^2\,(V\mathbf{1})^T x$ must vanish, which forces the linear constraint $V\mathbf{1} = 0$.) The above can be equivalently expressed as a positive semi-definiteness condition on a symmetric matrix that is affine in $(u, \mu, V)$, where $u := R^2$:

$$\begin{pmatrix} (\mathbf{1}^T\mu - 1)I - VX^T - XV^T & x_0 - X\mu - \tfrac{1}{2}V\sigma \\ \left(x_0 - X\mu - \tfrac{1}{2}V\sigma\right)^T & u - \|x_0\|_2^2 - \mu^T\sigma \end{pmatrix} \succeq 0.$$

(Here, as before, $X = [x_1,\ldots,x_m]$ and $\sigma_i = \rho_i^2 - \|x_i\|_2^2$; the term $\|x_0\|_2^2$ can be handled with a Schur complement, so that $x_0$ can be treated as a variable as well.) It remains to express the condition that the affine functions should be non-negative on the intersection. Precisely, we seek to enforce that for every $i$, the condition

$$\forall x:\ \|x - x_j\|_2 \le \rho_j,\ j = 1,\ldots,m \;\Longrightarrow\; \mu_i + v_i^T x \ge 0$$

holds (here, $v_i$ stands for the $i$-th column of matrix $V$). Now, we apply Lagrange duality directly. A sufficient condition for the above to hold is that there exist non-negative numbers $\gamma_{ij}$, $i, j = 1,\ldots,m$, such that

$$\forall x:\ \mu_i + v_i^T x + \sum_{j=1}^m \gamma_{ij}\left(\|x - x_j\|_2^2 - \rho_j^2\right) \ge 0.$$

Again, the above is equivalent to a single linear matrix inequality in the variables $\mu$, $V$ and $\gamma_i := (\gamma_{i1},\ldots,\gamma_{im})$:

$$\begin{pmatrix} (\mathbf{1}^T\gamma_i)\,I & \tfrac{1}{2}v_i - X\gamma_i \\ \left(\tfrac{1}{2}v_i - X\gamma_i\right)^T & \mu_i - \gamma_i^T\sigma \end{pmatrix} \succeq 0, \quad i = 1,\ldots,m.$$

Putting this together, we obtain the SDP in variables $(x_0, u, \mu, V, \gamma)$:

$$\min_{x_0,\,u,\,\mu,\,V,\,\gamma}\ u \;:\; \text{the two matrix inequalities above},\ V\mathbf{1} = 0,\ \gamma \ge 0,$$

with the optimal radius given by $R = \sqrt{u}$.
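To make the certificate concrete, the following sketch assembles the first matrix inequality for fixed numerical values and checks positive semi-definiteness. This only verifies a given certificate; a full SDP solution would require a semidefinite solver. The function names are ours, and with $V = 0$ the certificate reduces to the earlier constant-multiplier condition.

```python
import numpy as np

def containment_certificate(centers, radii, x0, u, mu, V):
    """Matrix whose positive semidefiniteness certifies that the sphere of
    center x0 and squared radius u contains the intersection, for the
    affine multipliers lambda(x) = mu + V^T x (with V @ 1 = 0)."""
    X = np.asarray(centers, dtype=float).T          # columns are the x_i
    rho = np.asarray(radii, dtype=float)
    sigma = rho ** 2 - np.sum(X ** 2, axis=0)
    n = X.shape[0]
    A = (mu.sum() - 1.0) * np.eye(n) - (V @ X.T + X @ V.T)
    b = x0 - X @ mu - 0.5 * V @ sigma
    c = u - x0 @ x0 - sigma @ mu
    return np.block([[A, b[:, None]], [b[None, :], np.array([[c]])]])

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol
```

For the two unit disks centered at $(0,0)$ and $(1,0)$, with $\mu = (1/2, 1/2)$, $V = 0$ and $x_0 = (1/2, 0)$, the certificate holds for $u = 3/4$ (tightly: the matrix is zero) and fails for any smaller $u$, consistent with the optimal radius $\sqrt{3}/2$ found earlier.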