• Feasible set

  • What is a solution? Optimal value, optimal set

  • Local vs. global optima

Consider the optimization problem

 (P) \qquad \min_x \; f_0(x) ~:~ f_i(x) \le 0, \;\; i=1,\ldots,m.

Feasible set

The feasible set of problem (P) is defined as

 \mathbf{X} := \left\{ x \in \mathbf{R}^n ~:~ f_i(x) \le 0, \;\; i=1,\ldots,m \right\}.

A point $x$ is said to be feasible for problem (P) if it belongs to the feasible set $\mathbf{X}$, that is, if it satisfies the constraints.

Example: In the toy optimization problem, the feasible set is the “box” in $\mathbf{R}^2$ described by $-1 \le x_1 \le 2$, $0 \le x_2 \le 3$.

The feasible set may be empty, if the constraints cannot be satisfied simultaneously. In this case the problem is said to be infeasible.
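To make the definition concrete, here is a minimal Python sketch of a feasibility test for the box example above; encoding the box as four constraint functions $f_i$, and the tolerance tol, are choices made for this illustration only.

```python
import numpy as np

# Box constraints -1 <= x1 <= 2, 0 <= x2 <= 3, written in the standard
# form f_i(x) <= 0, i = 1, ..., 4.
constraints = [
    lambda x: -1.0 - x[0],   # x1 >= -1  <=>  -1 - x1 <= 0
    lambda x: x[0] - 2.0,    # x1 <= 2
    lambda x: -x[1],         # x2 >= 0
    lambda x: x[1] - 3.0,    # x2 <= 3
]

def is_feasible(x, tol=1e-9):
    """Return True if x satisfies every constraint f_i(x) <= 0 (up to tol)."""
    return all(f(x) <= tol for f in constraints)

print(is_feasible(np.array([1.0, 2.0])))   # True: inside the box
print(is_feasible(np.array([3.0, 2.0])))   # False: violates x1 <= 2
```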

What is a solution?

In an optimization problem, we are usually interested in computing the optimal value of the objective function, and often also a minimizer, that is, a vector that achieves that value, if one exists.

Feasibility problems

Sometimes an objective function is not provided. This means that we are only interested in finding a feasible point, or in determining that the problem is infeasible. By convention, we set $f_0$ to be a constant in that case, to reflect the fact that we are indifferent to the choice of a point $x$ as long as it is feasible.
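As a sketch of how this convention can be used numerically, one can hand a solver a constant objective together with the constraints and keep whatever feasible point it returns. The choice of SciPy's SLSQP method, the starting point, and the reuse of the box example below are assumptions made for this illustration, not part of the problem statement.

```python
import numpy as np
from scipy.optimize import minimize

# Feasibility problem: constant objective f_0(x) = 0 over the box
# -1 <= x1 <= 2, 0 <= x2 <= 3, written as f_i(x) <= 0.
fs = [
    lambda x: -1.0 - x[0],
    lambda x: x[0] - 2.0,
    lambda x: -x[1],
    lambda x: x[1] - 3.0,
]

res = minimize(
    fun=lambda x: 0.0,            # constant objective: any feasible point will do
    x0=np.array([10.0, 10.0]),    # deliberately infeasible starting point
    method="SLSQP",
    # SciPy's "ineq" convention is fun(x) >= 0, hence the sign flip to -f_i.
    constraints=[{"type": "ineq", "fun": (lambda x, f=f: -f(x))} for f in fs],
)
print(res.x, all(f(res.x) <= 1e-8 for f in fs))   # a feasible point, True
```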

Optimal value

The optimal value of the problem is the value of the objective at the optimum; we denote it by $p^\ast$:

 p^\ast := \min_x \; f_0(x) ~:~ f_i(x) \le 0, \;\; i=1,\ldots,m.

Example: In the toy optimization problem, the optimal value is $p^\ast = -10.2667$.
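The toy problem's objective is not restated in this section, so the sketch below estimates the optimal value of a stand-in problem instead: a made-up quadratic objective minimized over the same box. Only the pattern carries over: solve, then read off $p^\ast$ and, if it exists, a minimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective (not the toy problem's): f_0(x) = (x1-3)^2 + (x2-1)^2,
# minimized over the box -1 <= x1 <= 2, 0 <= x2 <= 3.
f0 = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.0) ** 2

res = minimize(f0, x0=np.array([0.0, 0.0]), bounds=[(-1, 2), (0, 3)])
p_star = res.fun   # approximate optimal value p^*
x_star = res.x     # an (approximate) minimizer achieving it
print(p_star, x_star)   # about 1.0, attained near x = (2, 1)
```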

Optimal set

The optimal set (or, set of solutions) of problem (P) is defined as the set of feasible points for which the objective function achieves the optimal value:

 \mathbf{X}^{\rm opt} := \left\{ x \in \mathbf{R}^n ~:~ f_0(x) = p^\ast, \;\; f_i(x) \le 0, \;\; i=1,\ldots,m \right\}.

We take the convention that the optimal set is empty if the problem is not feasible.

A standard way to write the optimal set uses the argmin notation:

 \mathbf{X}^{\rm opt} = \arg\min_{x \in \mathbf{X}} \; f_0(x).

A point $x$ is said to be optimal if it belongs to the optimal set. Optimal points may not exist, in which case the optimal set is empty. This can happen because the problem is infeasible, or because the optimal value is only attained in the limit.

Example: The problem

 \min_x \; e^{-x}

has no optimal points, as the optimal value $p^\ast = 0$ is only attained in the limit $x \rightarrow +\infty$.
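A quick numerical check of this behaviour (the sample points are arbitrary): the objective keeps decreasing toward 0 as $x$ grows, but no finite $x$ attains the value 0.

```python
import numpy as np

# e^{-x} approaches its infimum 0 as x -> +infinity but never reaches it.
for x in [1.0, 10.0, 100.0]:
    print(x, np.exp(-x))   # roughly 0.37, 4.5e-05, 3.7e-44
```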

If the optimal set is not empty, we say that the problem is attained.

Suboptimality

The $\epsilon$-suboptimal set is defined as

 \mathbf{X}_\epsilon := \left\{ x \in \mathbf{R}^n ~:~ f_i(x) \le 0, \;\; i=1,\ldots,m, \;\; f_0(x) \le p^\ast + \epsilon \right\}.

(With our notation, $\mathbf{X}_0 = \mathbf{X}^{\rm opt}$.) Any point in the $\epsilon$-suboptimal set is termed $\epsilon$-suboptimal.

This set allows us to characterize points that are close to being optimal (when $\epsilon$ is small). In practice, algorithms are usually only able to compute suboptimal solutions, and never reach true optimality.
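As a sketch, an $\epsilon$-suboptimality test can be written as follows. In practice $p^\ast$ is unknown and is replaced by a lower bound (for example, one obtained from duality); the objective, constraints, test points, and tolerance below are illustrative assumptions.

```python
import numpy as np

def is_eps_suboptimal(x, f0, fs, p_star, eps, tol=1e-9):
    """True if x is feasible and f0(x) <= p_star + eps."""
    feasible = all(f(x) <= tol for f in fs)
    return feasible and f0(x) <= p_star + eps

# Hypothetical problem: quadratic objective over the box from the earlier sketch,
# whose optimal value is p^* = 1.
f0 = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.0) ** 2
fs = [lambda x: -1.0 - x[0], lambda x: x[0] - 2.0, lambda x: -x[1], lambda x: x[1] - 3.0]

print(is_eps_suboptimal(np.array([1.9, 1.0]), f0, fs, p_star=1.0, eps=0.25))  # True
print(is_eps_suboptimal(np.array([1.0, 1.0]), f0, fs, p_star=1.0, eps=0.25))  # False: f0 = 4
```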

Example: Nomenclature of the two-dimensional toy problem.

Local vs. global optimal points

A point $z$ is locally optimal if there is a value $R>0$ such that $z$ is optimal for the problem

 \min_x \; f_0(x) ~:~ f_i(x) \le 0, \;\; i=1,\ldots,m, \;\; \|z-x\|_2 \le R.

In other words, a local minimizer $x$ minimizes $f_0$, but only among nearby points in the feasible set; the corresponding objective value is therefore not necessarily the optimal value of the problem.

The term globally optimal (or simply optimal, for short) is used to distinguish points in the optimal set from local minima.

Example: a function with local minima.

Local optima are sometimes described as the curse of general optimization, as most algorithms tend to get trapped in local minima when they exist.
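A small illustration of this trapping effect: a made-up one-dimensional function with one local and one global minimum, minimized from two different starting points with SciPy's default quasi-Newton method. The function and the starting points are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

# f(x) = x^4 - 3x^2 + x has a local minimum near x = 1.13 and a
# (global) minimum near x = -1.30.
f = lambda x: x[0] ** 4 - 3.0 * x[0] ** 2 + x[0]

for x0 in [2.0, -2.0]:
    res = minimize(f, x0=np.array([x0]))
    print(f"start {x0:+.1f} -> x = {res.x[0]:+.3f}, f = {res.fun:+.3f}")
# Starting at +2 the method stops at the nearby local minimum (f about -1.07);
# starting at -2 it reaches the global one (f about -3.51).
```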