Best Linear Unbiased Estimation in Linear Models

Consider the linear model y = Xβ + ε, where X is a known n × p model matrix, y is an observable n-dimensional random vector, β is a p × 1 vector of unknown parameters, and ε is an unobservable vector of random errors with expectation E(ε) = 0 and covariance matrix cov(ε) = σ²V, where σ² > 0 is an unknown constant. The nonnegative definite (possibly singular) matrix V is known. In our considerations σ² plays no role and hence we may set σ² = 1. As regards notation, we will use the symbols A′, A⁻, A⁺, C(A), C(A)⊥, and N(A) to denote, respectively, the transpose, a generalized inverse, the Moore–Penrose inverse, the column space, the orthogonal complement of the column space, and the null space of the matrix A. By (A : B) we denote the partitioned matrix with A and B as submatrices. By A⊥ we denote any matrix satisfying C(A⊥) = N(A′) = C(A)⊥. Furthermore, we will write P_A = AA⁺ = A(A′A)⁻A′ to denote the orthogonal projector (with respect to the standard inner product) onto C(A). In particular, we denote H = P_X and M = Iₙ − H. One choice for X⊥ is of course the projector M.
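The projector identities above can be checked numerically. The following sketch (assuming a small, hypothetical full-rank design matrix X; the paper itself fixes no particular X) forms H = X(X′X)⁻X′ using the Moore–Penrose inverse as the generalized inverse, and verifies that H and M = Iₙ − H behave as the orthogonal projectors onto C(X) and its complement:

```python
import numpy as np

# Hypothetical design matrix X (n = 4 observations, p = 2 parameters);
# chosen only for illustration, not taken from the paper.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])

# H = X (X'X)^- X': np.linalg.pinv yields the Moore-Penrose inverse,
# which is one valid choice of generalized inverse (X'X)^-.
H = X @ np.linalg.pinv(X.T @ X) @ X.T

# M = I_n - H projects onto C(X)^⊥, so M is one choice for X^⊥.
M = np.eye(X.shape[0]) - H

# Orthogonal-projector properties: symmetry, idempotence, and H X = X.
assert np.allclose(H, H.T)
assert np.allclose(H @ H, H)
assert np.allclose(H @ X, X)
# M annihilates C(X): M X = 0, and H M = 0.
assert np.allclose(M @ X, 0.0)
assert np.allclose(H @ M, 0.0)
```

Note that H does not depend on which generalized inverse (X′X)⁻ is used, since H is the unique orthogonal projector onto C(X); the Moore–Penrose inverse is simply a convenient computational choice.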
