Symmetric Matrices > Exercises
Interpretation of covariance matrix

We are given $m$ points $x_1, \ldots, x_m$ in $\mathbb{R}^n$. We assume that the average and variance of the data projected along a given direction do not change with the direction. In this exercise we will show that the sample covariance matrix is then proportional to the identity. We formalize this as follows. To a given normalized direction $w \in \mathbb{R}^n$ ($\|w\|_2 = 1$), we associate the line with direction $w$ passing through the origin, $\mathcal{L}(w) = \{ t w \,:\, t \in \mathbb{R} \}$. We then consider the projection of the points $x_i$, $i = 1, \ldots, m$, on the line $\mathcal{L}(w)$, and look at the associated coordinates of the points on the line. These projected values are given by
$$ t_i(w) := \arg\min_t \| t w - x_i \|_2 = w^T x_i, \quad i = 1, \ldots, m. $$
We assume that for any $w$, the sample average $\hat{t}(w)$ of the projected values $t_i(w)$, $i = 1, \ldots, m$, and their sample variance $\sigma^2(w)$, are both constant, independent of the direction $w$ (with $\|w\|_2 = 1$). Denote by $\hat{t}$ and $\sigma^2$ the (constant) sample average and variance. Justify your answer to the following as carefully as you can.
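The projected coordinates and their direction-dependent statistics are easy to compute numerically. A minimal sketch (NumPy), using randomly generated placeholder data in place of the given points $x_i$ — the function name `projected_stats` is illustrative, not from the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 3
X = rng.standard_normal((m, n))     # rows are placeholder data points x_i

def projected_stats(X, w):
    """Sample mean and variance of the coordinates t_i(w) = w^T x_i."""
    w = w / np.linalg.norm(w)       # normalize the direction
    t = X @ w                       # t_i(w), i = 1, ..., m
    return t.mean(), t.var()        # sample average and (biased) sample variance

# for generic data these statistics do depend on the direction w;
# the exercise's assumption is precisely that they do not
print(projected_stats(X, np.array([1.0, 0.0, 0.0])))
print(projected_stats(X, np.array([0.0, 1.0, 1.0])))
```

Note that `t.var()` here equals $w^T \Sigma w$ where $\Sigma$ is the (biased) sample covariance of the data, which is the identity underlying part 2 below.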
1. Show that the sample average of the data points, $\hat{x} := \frac{1}{m} \sum_{i=1}^m x_i$, is zero.
2. Show that the sample covariance matrix of the data points, $\Sigma := \frac{1}{m} \sum_{i=1}^m (x_i - \hat{x})(x_i - \hat{x})^T$, is of the form $\sigma^2 I_n$, where $I_n$ is the identity matrix of order $n$. (Hint: the largest eigenvalue $\lambda_{\max}$ of the matrix $\Sigma$ can be written as
$$ \lambda_{\max} = \max_w \ \{ w^T \Sigma w \,:\, w^T w = 1 \}, $$
and a similar expression holds for the smallest eigenvalue.)

Eigenvalue decomposition

Let $p, q \in \mathbb{R}^n$ be two linearly independent vectors, with unit norm ($\|p\|_2 = \|q\|_2 = 1$). Define the symmetric matrix $A := p q^T + q p^T$. In your derivations, it may be useful to use the notation $c := p^T q$.
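As a numerical sanity check on this setup (the questions themselves are not reproduced here), one can form $A = p q^T + q p^T$ and verify directly that $p + q$ and $p - q$ are eigenvectors, with eigenvalues $1 + c$ and $c - 1$ respectively — this follows from $A(p \pm q) = p\,q^T(p \pm q) + q\,p^T(p \pm q)$ and $\|p\|_2 = \|q\|_2 = 1$. A sketch with random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
p = rng.standard_normal(n); p /= np.linalg.norm(p)   # unit-norm p
q = rng.standard_normal(n); q /= np.linalg.norm(q)   # unit-norm q
c = p @ q                                            # c = p^T q

A = np.outer(p, q) + np.outer(q, p)                  # A = p q^T + q p^T (symmetric)

# p + q and p - q are eigenvectors, with eigenvalues 1 + c and c - 1:
print(np.allclose(A @ (p + q), (1 + c) * (p + q)))   # True
print(np.allclose(A @ (p - q), (c - 1) * (p - q)))   # True
```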
Positive-definite matrices, ellipsoids
How would you implement an algorithm for drawing (in two dimensions) the ellipsoid
$$ \mathcal{E} = \{ x \in \mathbb{R}^2 \,:\, (x - \hat{x})^T P^{-1} (x - \hat{x}) \le 1 \}, $$
where $P$ is $2 \times 2$ and symmetric, positive-definite, and $\hat{x} \in \mathbb{R}^2$? Describe your algorithm as precisely as possible. (You are welcome to provide MATLAB code.) Draw the ellipsoid $\mathcal{E}$.

Least-squares estimation
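One standard approach is to factor $P = L L^T$ (Cholesky) and map the unit circle through $x = \hat{x} + L u$: then $(x - \hat{x})^T P^{-1} (x - \hat{x}) = u^T u = 1$, so the image of the circle is exactly the boundary of the ellipsoid. A NumPy sketch of this idea (the matrix `P` and center `xhat` below are illustrative values, not taken from the exercise statement):

```python
import numpy as np

def ellipse_boundary(P, xhat, num=200):
    """Points on the boundary of E = {x : (x - xhat)^T P^{-1} (x - xhat) <= 1}.

    Factor P = L L^T; x = xhat + L u traces the boundary as u runs over
    the unit circle, since then (x - xhat)^T P^{-1} (x - xhat) = u^T u = 1.
    """
    L = np.linalg.cholesky(P)                        # lower-triangular factor
    theta = np.linspace(0.0, 2 * np.pi, num)
    U = np.vstack((np.cos(theta), np.sin(theta)))    # unit-circle points, 2 x num
    return (xhat.reshape(2, 1) + L @ U).T            # num x 2 array of points

# illustrative instance
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
xhat = np.array([1.0, -1.0])
pts = ellipse_boundary(P, xhat)
# pts can now be passed to a plotting routine, e.g. plt.plot(pts[:, 0], pts[:, 1])
```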
We consider the linear measurement model
$$ y = A x + v, $$
where $v \in \mathbb{R}^m$ is a noise vector, and the input is $x \in \mathbb{R}^n$; $A$ is a full rank, tall matrix ($m \ge n$), and $y \in \mathbb{R}^m$ is the measured output. We do not know anything about $v$, except that it is bounded: $\|v\|_2 \le \alpha$, with $\alpha \ge 0$ a measure of the level of noise. Our goal is to provide an estimate $\hat{x}$ of $x$ via a linear estimator, that is, a function $\hat{x} = B y$ with $B$ an $n \times m$ matrix. We restrict attention to unbiased estimators, which are such that $\hat{x} = x$ when $v = 0$. This implies that $B$ should be a left inverse of $A$, that is, $B A = I_n$. An example of a linear unbiased estimator is obtained by solving the least-squares problem
$$ \min_x \ \| A x - y \|_2. $$
The solution is, when $A$ is full column rank, of the form $\hat{x}_{\rm LS} = B_{\rm LS} y$, with $B_{\rm LS} = (A^T A)^{-1} A^T$. We note that $B_{\rm LS} A = I_n$, which means that the LS estimator is unbiased. In this exercise, we show that $B_{\rm LS}$ is the best unbiased linear estimator. (This is often referred to as the BLUE property.)
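The least-squares estimator and its unbiasedness are easy to check numerically. A sketch with randomly generated problem data (all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
A = rng.standard_normal((m, n))           # tall matrix, full column rank w.p. 1
x_true = rng.standard_normal(n)
v = 0.01 * rng.standard_normal(m)         # small bounded noise
y = A @ x_true + v

B_ls = np.linalg.inv(A.T @ A) @ A.T       # B_LS = (A^T A)^{-1} A^T
x_est = B_ls @ y                          # the least-squares estimate

print(np.allclose(B_ls @ A, np.eye(n)))   # B_LS is a left inverse of A -> True
```

In practice one would compute the estimate with `np.linalg.lstsq(A, y)` or `np.linalg.pinv(A) @ y` rather than forming the inverse of $A^T A$ explicitly; the formula above mirrors the text.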
Show that $B_{\rm LS}$ is the best unbiased linear estimator (BLUE), in the sense that it solves the above problem. Hint: show that any unbiased linear estimator can be written as $B = B_{\rm LS} + Z$ with $Z A = 0$, and that $B B^T - B_{\rm LS} B_{\rm LS}^T$ is then positive semi-definite.
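The structure suggested by the hint can be verified numerically: any $Z$ whose rows are orthogonal to the range of $A$ satisfies $Z A = 0$, so $B = B_{\rm LS} + Z$ is still a left inverse of $A$, and the cross terms in $B B^T$ vanish, leaving $B B^T - B_{\rm LS} B_{\rm LS}^T = Z Z^T \succeq 0$. A sketch (the construction of `Z` via the projector $I - A B_{\rm LS}$ is one convenient way to generate such matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 3
A = rng.standard_normal((m, n))
B_ls = np.linalg.pinv(A)                   # equals (A^T A)^{-1} A^T here

# build an unbiased estimator B = B_ls + Z with Z A = 0:
W = rng.standard_normal((n, m))
Z = W @ (np.eye(m) - A @ B_ls)             # I - A B_ls projects away range(A)
B = B_ls + Z

print(np.allclose(B @ A, np.eye(n)))       # B is still a left inverse -> True
D = B @ B.T - B_ls @ B_ls.T                # equals Z Z^T, hence PSD
print(np.all(np.linalg.eigvalsh(D) >= -1e-10))   # True
```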