Complexity of Optimization Problems

Hard vs. easy problems

Not all optimization problems are created equal. Some problem classes (such as finding a solution to a set of linear equalities or inequalities) are easy, that is, they can be solved in a reasonable amount of time and memory on a computer. Others are inescapably hard: the travelling salesman problem, which involves finding the shortest tour among a combinatorially large number of possible tours, is a famous example.
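To get a feel for why the travelling salesman problem is hard, the following sketch (an illustration of my own, not part of the original notes) simply counts the tours a brute-force search would have to compare; the count grows factorially with the number of cities.

```python
# Illustration only: count the distinct tours of a symmetric travelling
# salesman problem. Fixing the starting city and identifying each tour with
# its reverse leaves (n - 1)!/2 candidate tours for n cities.
import math

for n_cities in (5, 10, 15, 20, 25):
    n_tours = math.factorial(n_cities - 1) // 2
    print(f"{n_cities:2d} cities -> {n_tours:,} tours to examine by brute force")
```

Already at 25 cities the count exceeds 10^23, so enumeration is hopeless; by contrast, a system of linear inequalities with a comparable number of unknowns is solved in a fraction of a second (see the linear programming sketch further below).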

The field of complexity theory gives a rough, but very useful, guideline for deciding the ‘‘hardness’’ of a problem. In complexity theory, the focus is on how the worst-case computing time grows as the problem size grows. Problem size can be a complicated notion, but we can roughly think of it as the number of variables and constraints in an optimization problem (obviously, the cost of evaluating the objective and constraints for a given choice of the variables also counts).
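As a concrete (hypothetical) illustration of what ‘‘problem size’’ might mean, consider a linear program in n variables with m inequality constraints: its data are the entries of the cost vector, the constraint matrix and the right-hand side. The short sketch below, using arbitrary dimensions, simply counts them.

```python
# A rough sketch (illustration only) of "problem size" for a linear program
#   minimize c^T x   subject to   A x <= b,
# with n variables and m inequality constraints: the data are the entries of
# c, A and b, roughly m*n + m + n numbers in total.
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 200                       # arbitrary choice of dimensions
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
print(f"n={n}, m={m}: about {c.size + A.size + b.size:,} numbers of problem data")
```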

In the late 80s, it was believed that what made an optimization problem hard was its non-linear components. In other words: linear problems are easy; non-linear ones are hard. A major result, due to Nesterov and Nemirovski (1989), showed that this is not the case: the big watershed is between convex and non-convex problems. Roughly speaking, convex problems are easy (and that includes linear programming problems); non-convex ones are hard. Of course, this statement needs to be qualified. Not all convex problems are easy, but a reasonably large subset of them are. Conversely, some non-convex problems are actually easy; for example, some path-planning problems can be solved in linear time. In this class, we will explore the contours of the set of convex problems that are easy to solve.
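To make the claim that convex problems (and linear programs in particular) are easy a bit more tangible, here is a minimal sketch, assuming SciPy is available and using randomly generated data chosen only so the problem is feasible and bounded: an off-the-shelf solver handles a linear program with 500 variables and 1000 inequality constraints essentially instantly.

```python
# Minimal sketch (assumes SciPy; random data, illustration only): solve
#   minimize c^T x   subject to   A x <= b,   0 <= x <= 1.
import time
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 500, 1000
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.ones(m)                       # x = 0 satisfies A x <= b, so the LP is feasible

t0 = time.perf_counter()
res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1))
print(f"{res.message}  optimal value: {res.fun:.3f}  "
      f"time: {time.perf_counter() - t0:.2f} s")
```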