A CONDITIONAL DERIVATION OF RESIDUAL MAXIMUM LIKELIHOOD

The model considered is

Y = Xβ + ε,

where Y is an n × 1 vector of responses, X is an n × p design matrix of full column rank and ε ∼ N(0, σ²Ω(γ)) is a random error vector. The matrix Ω(γ) is assumed positive definite and depends on a parameter vector γ. The aim is to estimate β, σ² and γ.

Patterson & Thompson (1971) propose using a restricted likelihood function to estimate σ² and γ. This estimate is substituted in the generalised least squares estimate of β for given γ to provide the residual maximum likelihood (REML) estimate of β. The original derivation of the likelihood function was somewhat involved, and this, together with a Bayesian perspective, prompted Harville (1974) to develop an alternative approach. Cooper & Thompson (1977) provide yet another derivation.

The aim of this paper is to present an alternative derivation of the likelihood function which uses some simple results and which may be of use in teaching. The approach given here is related to the derivation of Harville (1974). Briefly, Harville (1974) transforms from Y to the generalised least squares estimate β̂ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹Y and to a set of n − p linear functions which have zero mean, L₂′Y in the development below. Using the independence of these two sets of linear functions allows the likelihood to be derived. The main difference between the derivation of Harville and that given here is that the transformation used here is free of Ω.

Suppose L₁ and L₂ are n × p and n × (n − p) matrices respectively, both of full column rank, satisfying
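The role of the Ω-free transformation can be sketched numerically. The sketch below (the dimensions, data and the choice Ω = I are illustrative assumptions, not taken from the paper) builds L₂ as an orthonormal basis for the null space of X′, so that L₂′X = 0 and the n − p contrasts L₂′Y have zero mean whatever β is, and then evaluates the log-likelihood of L₂′Y ∼ N(0, σ²L₂′ΩL₂), which is the restricted likelihood used by REML.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, p = 10, 3

# Illustrative full-column-rank design matrix and response vector.
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
Y = X @ beta + rng.standard_normal(n)

# L2: n x (n - p) matrix whose columns span the null space of X',
# so L2' X = 0; the construction does not involve Omega at all.
L2 = null_space(X.T)
assert L2.shape == (n, n - p)
assert np.allclose(L2.T @ X, 0.0)

# The contrasts Z = L2' Y have zero mean for every beta, and
# Z ~ N(0, sigma^2 L2' Omega L2); evaluate their log-likelihood
# at a trial sigma^2 with Omega = I for simplicity.
Omega = np.eye(n)
sigma2 = 1.0
Z = L2.T @ Y
V = sigma2 * (L2.T @ Omega @ L2)
_, logdet = np.linalg.slogdet(V)
loglik = -0.5 * ((n - p) * np.log(2 * np.pi)
                 + logdet
                 + Z @ np.linalg.solve(V, Z))
print(loglik)
```

Because L₂ depends only on X, the same contrasts serve for every candidate (σ², γ); maximising the log-likelihood above over those parameters gives the REML estimates.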