Generalisation of Markoff's theorem and tests of linear hypotheses

The general theory of linear estimation, without involving the assumptions of normality or independence of variates, was first given by Markoff in his book Calculus of Probability, published in Russian. Sheppard (1912 and 1914), working independently, published some results approximately on the same lines as those of Markoff, but of a less general character. A distinct advance was made by Aitken (1935), who removed the unnecessary limitations in both Sheppard's and Markoff's results. Recently David and Neyman (1938) have published an article giving a slight extension of Markoff's theorem.

A significant step in generalising the theory of linear estimation is due to Raj Chandra Bose (1943), who, for the first time, introduced the concept of non-estimable parametric functions. He distinguishes two types of linear functions of stochastic variates: the estimating functions and the error functions. A linear function BY′ = b₁y₁ + b₂y₂ + … + bₙyₙ of the stochastic variates is said to belong to error if E(BY′) = 0. The totality of the independent vectors such as B constitutes a vector space which is called the error space. The vector space orthogonal to this error space is called the estimation space, and the best unbiassed estimate of any estimable parametric function comes out as the scalar product of the vector Y = (y₁, y₂, …, yₙ) of the stochastic variates and a vector C of the estimation space.

The present paper is the result of ideas suggested by Bose's results (1943) and his post-graduate lectures at the Calcutta University in 1943-44 on the general theory of linear estimation and the fundamental structure of the analysis of variance. The object of the present paper is, firstly, to take up the most general problem in the theory of linear estimation and obtain suitable generalisations of the previous results, and, secondly, to derive tests of significance connected with linear hypotheses.
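Bose's decomposition can be sketched in modern matrix terms, a convenience not used in the paper itself. Assuming a hypothetical linear model E(Y) = Xβ with uncorrelated, equal-variance errors (the names X, Y, β, p below are illustrative assumptions, not notation from the source), the error space is the orthogonal complement of the column space of X, and the best unbiassed estimate of an estimable function p′β is the scalar product C′Y with C lying in the estimation space:

```python
import numpy as np

# Hypothetical design: E(Y) = X @ beta, errors uncorrelated with equal variance.
rng = np.random.default_rng(0)
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
beta = np.array([2.0, 0.5])
Y = X @ beta + rng.normal(scale=0.1, size=4)

# Full QR factorisation splits R^n into the estimation space (the column
# space of X) and its orthogonal complement, the error space.
r = np.linalg.matrix_rank(X)
Q_full, _ = np.linalg.qr(X, mode="complete")
est_basis = Q_full[:, :r]   # orthonormal basis of the estimation space
err_basis = Q_full[:, r:]   # orthonormal basis of the error space

# Every error-space vector B satisfies B'X = 0, hence E(B'Y) = B'X beta = 0:
# B'Y "belongs to error" in Bose's sense.
assert np.allclose(err_basis.T @ X, 0)

# Best unbiassed estimate of an estimable p'beta is C'Y with C in the
# estimation space; the least-squares choice is C = X (X'X)^+ p.
p = np.array([1.0, 1.0])                      # estimate beta_0 + beta_1
C = X @ np.linalg.pinv(X.T @ X) @ p
assert np.allclose(est_basis @ est_basis.T @ C, C)  # C lies in estimation space
estimate = C @ Y
```

With the small error variance assumed above, `estimate` falls close to the true value p′β = 2.5; the point of the sketch is only that C is orthogonal to every error-space vector, so C′Y is unbiassed for p′β.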