Ordinary Least-Squares Problem

  • Definition

  • Interpretations

  • Solution via QR decomposition (full rank case)

  • Optimal solution (general case)

Definition

The Ordinary Least-Squares (OLS, or LS) problem is defined as

 min_x : |Ax-y|_2^2

where A in mathbf{R}^{m times n}, y in mathbf{R}^m are given. Together, the pair (A,y) is referred to as the problem data. The vector y is often referred to as the "measurement" or "output" vector, and the data matrix A as the "design" or "input" matrix. The vector r := y-Ax is referred to as the residual error vector.

Note that the problem is equivalent to one where the norm is not squared: both problems have the same minimizers. Squaring the norm is done for convenience, as it makes the objective differentiable everywhere.
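As a concrete illustration, here is a minimal numerical sketch in Matlab; the specific values of A and y below are made up for illustration only:

Matlab syntax
>> A = [1 2; 3 4; 5 6];   % data matrix, m = 3, n = 2
>> y = [1; 0; 2];         % measurement vector
>> x = A\y;               % least-squares solution
>> r = y - A*x;           % residual error vector
>> norm(r)^2              % optimal objective value |Ax-y|_2^2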

Interpretations

Interpretation as projection on the range

(Figure: projection of the vector y onto the range of A; at optimum the residual is orthogonal to that range.)

We can interpret the problem in terms of the columns of A, as follows. Assume that A = [a_1, ldots, a_n], where a_j in mathbf{R}^m is the j-th column of A, j=1,ldots,n. The problem reads

 min_x : left| sum_{j=1}^n x_j a_j - y right|_2.

In this sense, we are trying to find the best approximation of y in terms of a linear combination of the columns of A. Thus, the OLS problem amounts to projecting (in the sense of minimum Euclidean distance) the vector y onto the span of the vectors a_j, that is, the range of A.

As seen in the picture, at optimum the residual vector y-Ax is orthogonal to the range of A.
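One can verify this orthogonality condition numerically; the sketch below uses random data for illustration:

Matlab syntax
>> A = randn(5,3); y = randn(5,1);   % random problem data
>> x = A\y;                          % least-squares solution
>> A'*(y - A*x)                      % ~ zero vector: residual orthogonal to range(A)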


Interpretation as minimum distance to feasibility

The OLS problem is usually applied to problems where the linear equation Ax = y is not feasible, that is, there is no solution x to Ax = y.

The OLS problem can be interpreted as finding the smallest (in the Euclidean norm sense) perturbation delta y of the right-hand side, such that the linear equation

 Ax = y+ delta y

becomes feasible. In this sense, the OLS formulation implicitly assumes that the data matrix A of the problem is known exactly, while only the right-hand side is subject to perturbation, or measurement errors. A more elaborate model, total least-squares, takes into account errors in both A and y.
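The smallest perturbation can be computed directly from a least-squares solution, since delta y = Ax^ast - y; here is a sketch on random (generically infeasible) data:

Matlab syntax
>> A = randn(5,2); y = randn(5,1);   % tall system, infeasible for generic y
>> x = A\y;                          % least-squares solution
>> dy = A*x - y;                     % smallest perturbation: A*x = y + dy holds
>> norm(dy)                          % its Euclidean norm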

Interpretation as regression

We can also interpret the problem in terms of the rows of A, as follows. Assume that A^T = [a_1, ldots, a_m], where a_i in mathbf{R}^n, i=1,ldots,m, so that a_i^T is the i-th row of A. The problem reads

 min_x : sum_{i=1}^m (y_i - a_i^Tx)^2.

In this sense, we are trying to fit each component y_i of y as a linear function a_i^Tx of the corresponding input a_i, with x as the vector of regression coefficients.
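For illustration, the sketch below fits synthetic data generated from a known coefficient vector; the values and noise level are made up:

Matlab syntax
>> m = 50; n = 3;
>> A = randn(m,n);                  % rows a_i^T are the input samples
>> x_true = [1; -2; 0.5];           % hypothetical true coefficients
>> y = A*x_true + 0.1*randn(m,1);   % noisy outputs
>> x = A\y;                         % estimated coefficients, close to x_true
>> sum((y - A*x).^2)                % sum of squared residuals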


Solution via QR decomposition (full rank case)

Assume that the matrix A in mathbf{R}^{m times n} is tall (m ge n) and full column rank. Then the solution to the problem is unique, and given by

 x^ast = (A^TA)^{-1} A^Ty .

This can be seen by setting the gradient (vector of derivatives) of the objective function to zero, which leads to the optimality condition A^T(Ax-y) = 0. Geometrically, the residual vector y-Ax is then orthogonal to the span of the columns of A, as seen in the picture above.
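Numerically, this closed-form solution can be obtained by solving the normal equations A^TAx = A^Ty; forming A^TA is shown here only for illustration, as the QR approach below is preferred for numerical accuracy:

Matlab syntax
>> A = randn(6,3); y = randn(6,1);   % random full column rank instance
>> x = (A'*A)\(A'*y);                % solve the normal equations
>> norm(A'*(A*x - y))                % optimality condition: ~ zero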

We can also prove this via the QR decomposition of the matrix A: A = QR, with Q an m times n matrix with orthonormal columns (Q^TQ = I_n) and R an n times n upper-triangular, invertible matrix. Noting that

 |Ax-y|_2^2 = x^TA^TAx-2x^TA^Ty + y^Ty = x^TR^TRx - 2x^TR^TQ^Ty + y^Ty = |Rx-Q^Ty|_2^2 +y^T(I-QQ^T)y,

and exploiting the fact that R is invertible, we obtain the optimal solution x^ast = R^{-1}Q^Ty. This is the same as the formula above, since

 (A^TA)^{-1} A^Ty = (R^TQ^TQR)^{-1}R^TQ^Ty = (R^TR)^{-1} R^TQ^Ty = R^{-1} Q^Ty.

Thus, to find the solution based on the QR decomposition, we just need to implement two steps:

  1. Rotate the output vector: set bar{y} = Q^Ty.

  2. Solve the triangular system Rx = bar{y} by backward substitution.

In Matlab, the backslash operator finds the (unique) solution when A is full column rank.

Matlab syntax
>> x = A\y;
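The two steps can also be carried out explicitly; the sketch below uses Matlab's economy-size QR decomposition, qr(A,0), which returns the thin factors Q (m times n) and R (n times n) from the derivation above:

Matlab syntax
>> [Q,R] = qr(A,0);   % thin QR decomposition: A = Q*R, Q'*Q = eye(n)
>> ybar = Q'*y;       % step 1: rotate the output vector
>> x = R\ybar;        % step 2: backsolve the triangular system Rx = ybar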

Optimal solution and optimal set

Recall that the optimal set of a minimization problem is its set of minimizers. For least-squares problems, the optimal set is an affine set, which reduces to a singleton when A is full column rank.

In the general case (A not necessarily tall, and/or not full column rank), the solution may not be unique. If x^0 is a particular solution, then x = x^0+z is also a solution for any z such that Az=0, that is, z in mathbf{N}(A). Thus, the nullspace of A describes the ambiguity in the solutions. In mathematical terms:

 arg min_x : |Ax-y|_2 = x^0 + mathbf{N}(A).

The formal expression for the set of minimizers to the least-squares problem can again be found via the QR decomposition of A.
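A numerical sketch of the general case, using Matlab's pinv (which returns a particular, minimum-norm solution) and null (which returns an orthonormal basis of the nullspace); the specific rank-deficient data below are made up:

Matlab syntax
>> A = [1 2 3; 2 4 6]; y = [1; 2];   % rank(A) = 1: solution is not unique
>> x0 = pinv(A)*y;                   % minimum-norm least-squares solution
>> Z = null(A);                      % orthonormal basis of N(A)
>> z = randn(size(Z,2),1);           % arbitrary nullspace coefficients
>> norm(A*(x0 + Z*z) - y)            % residual unchanged along x0 + N(A)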