On least squares collocation

It is shown that the least squares collocation approach to estimating geodetic parameters is identical to conventional minimum variance estimation. Hence the least squares collocation estimator can be derived either by minimizing the usual least squares quadratic loss function or by computing a conditional expectation by means of the regression equation. When a deterministic functional relationship between the data and the parameters to be estimated is available, one can implement a least squares solution using the functional relation to obtain an equation of condition. It is proved that the solution so obtained is identical to that obtained through least squares collocation. The implications of this equivalence for the estimation of mean gravity anomalies are discussed.

INTRODUCTION

A characteristic of geodetic research is that numerous data types are available for estimating parameters of interest. The problem of combining heterogeneous geodetic data types to provide consistent estimates has led some researchers to the belief that conventional least squares methods are inadequate. An alternative approach to geodetic data reduction problems, called least squares collocation, has been suggested by Moritz [1]. Some authors have claimed that least squares collocation is a more general and more powerful parameter estimation procedure than the classical least squares method [1, 2, 3, 4, 5]. It has also been asserted that least squares collocation is the only parameter estimation method which permits the simultaneous and optimal processing of heterogeneous data types [6, 7]. Other authors have disputed these claims [8, 9]. This note is an effort to settle what has become a confusing and contentious issue. It will be demonstrated that least squares collocation is an estimator of a type which is well known in conventional estimation theory. The presentation is elementary in content and should be intelligible to anyone familiar with the rudiments of probability theory.

SOME PROPERTIES OF MINIMUM VARIANCE ESTIMATORS

Let X be a finite dimensional vector of parameters to be estimated. Since the parameters are not perfectly known, it is legitimate to view X as a random vector. Also, there is no loss in generality in assuming the zero vector to be the expectation of X. Let the covariance matrix of X be known. Thus

E[X X^T] = C        (1)

where C is positive definite. Assume the existence of a finite vector Y which defines a state which is directly observable. Hence Y is a random vector which is sampled by a measuring process.

Lacking data, the minimum variance estimate of X is the zero vector. But intuitively it is clear that if the random vectors Y and X are correlated, and if a realization Y' of Y is available, it should be possible to obtain an improved estimate X̂ of X. Several criteria are available. Two of the most commonly used are:

Criterion A: choose X̂ as that vector which minimizes the conventional least squares quadratic form.

Criterion B: choose X̂ as the expectation vector of the conditional distribution of X given a realization Y' of Y.

It will be shown that the application of either criterion leads to the same estimator.
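For concreteness, Criterion B admits a closed form under an assumption we add here (it is not stated at this point in the text): if X and Y are zero-mean and jointly Gaussian, the conditional expectation is linear in the data,

X̂ = E[X | Y = Y'] = C_XY C_YY^{-1} Y'

where C_XY = E[X Y^T] is the cross covariance matrix of X and Y, and C_YY = E[Y Y^T] is the covariance matrix of the data. This is the regression equation referred to in the abstract.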
To obtain the improved estimate X̂, it is necessary to precisely define the correlation between Y and X. This is commonly done in two ways, which we will describe as model 1 and model 2. In model 1 the correlation is described by a linear stochastic equation

Y = S X + v,    E[v v^T] = Q        (2)

In model 2 the correlation is described in terms of a cross covariance matrix.
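The equivalence the note sets out to demonstrate can be checked numerically. The sketch below is ours, not the paper's: it draws random S, C, Q consistent with equations (1) and (2), assumes v has zero mean and is uncorrelated with X, takes one standard least squares quadratic form for Criterion A, and confirms that the two criteria produce the same estimate. All names other than S, C, Q are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4, 6                          # dimensions of X and Y

    # Covariance C of X, eq. (1): random symmetric positive definite matrix.
    A = rng.standard_normal((n, n))
    C = A @ A.T + n * np.eye(n)

    # Model 1, eq. (2): Y = S X + v with noise covariance Q; we assume v has
    # zero mean and is uncorrelated with X.
    S = rng.standard_normal((m, n))
    B = rng.standard_normal((m, m))
    Q = B @ B.T + m * np.eye(m)

    Yp = rng.standard_normal(m)          # stands in for a realization Y' of Y

    # Criterion A: minimize the quadratic form (our choice of form)
    # (Y' - S X)^T Q^{-1} (Y' - S X) + X^T C^{-1} X.
    Qi, Ci = np.linalg.inv(Q), np.linalg.inv(C)
    X_a = np.linalg.solve(S.T @ Qi @ S + Ci, S.T @ Qi @ Yp)

    # Criterion B: regression on the covariances implied by model 1,
    # C_XY = C S^T and C_YY = S C S^T + Q.
    X_b = C @ S.T @ np.linalg.solve(S @ C @ S.T + Q, Yp)

    print(np.allclose(X_a, X_b))         # True: both criteria give one estimator

The agreement is the matrix inversion lemma at work; it holds for any positive definite C and Q, not just for this random instance.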