definite over the whole space. Dickinson's result shows that this stronger hypothesis is necessary in that weaker assumptions about the positive definiteness of finite correlation matrices are not, in general, sufficient in the multidimensional case. One may verify this extendability hypothesis by actually performing the Markov spectral estimation and then checking the resulting 2-D denominator polynomial for positivity using the method of [2]. A numerically simpler procedure to verify this extendability hypothesis would be to use the convolution polynomial (CP) approach of [1] with low degree. In this case, if one takes the CP degree to be even, the positivity test is automatically satisfied near convergence. Thus convergence of a low-degree even CP would also demonstrate the extendability hypothesis.

REFERENCES
[1] J. W. Woods, "Two-dimensional Markov spectral estimation," IEEE
[2] N. K. Bose and S. Basu, "Tests for polynomial zeros on a polydisk distinguished boundary," IEEE Trans.

A fact that by now seems hardly to merit a second thought is that the least-squares solution x̂ to an inconsistent set of n linear equations in m unknowns (n > m), Ax ≈ y, is determined by the solution of the so-called "normal equations" A'Ax̂ = A'y. Especially when A'A is nonsingular, as we shall assume, there seems hardly anything more to be said (otherwise we may use the so-called "Moore-Penrose" generalized inverse). If, perchance, someone (perhaps a new student) wondered why, when all the information is in the matrix A, it is necessary to form A'A and then (A'A)^{-1}A', his unease would soon be stilled by the weight of tradition (or his teacher). Or so it seemed until in 1965 Golub, developing some ideas of Householder, demonstrated that one could proceed more directly. Note first that minimizing ||Ax - y||^2 is the same as minimizing ||TAx - Ty||^2 for any orthogonal matrix T (i.e., T'T = I = TT', ||x||^2 = x'x, the prime denoting transpose).
Golub pointed out that if we choose T so that (TA)' = [R' 0], with (Ty)' = [z' e'], where R is a triangular full-rank matrix, then ||TAx - Ty||^2 = ||Rx - z||^2 + ||e||^2, so that the minimum is achieved by choosing x = x̂ = R^{-1}z. From the relation (TA)' = [R' 0] we see that A'A = R'R, so that R is a (triangular) square-root factor of the normal matrix A'A, which is the reason for the word "factorization" in the title. The point, of course, is that R is found directly from A without first forming A'A. This may be important in certain problems, where we may have …
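The argument above can be illustrated numerically. The following minimal sketch (not from the review; the data are made up for illustration) compares the normal-equations solution with the orthogonal-factorization route, using NumPy's QR routine, and checks that R is indeed a triangular square-root factor of A'A:

```python
import numpy as np

# Overdetermined system: n = 5 equations in m = 2 unknowns (illustrative data).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0]])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Traditional route: form A'A and solve the normal equations (A'A) x = A'y.
x_normal = np.linalg.solve(A.T @ A, A.T @ y)

# Golub's route: factor A = Q [R; 0] (here T plays the role of Q'),
# so that x = R^{-1} z with z = Q'y, never forming A'A.
Q, R = np.linalg.qr(A)              # reduced QR: Q is 5x2, R is 2x2 upper triangular
x_qr = np.linalg.solve(R, Q.T @ y)

# R is a (triangular) square-root factor of the normal matrix: A'A = R'R.
print(np.allclose(A.T @ A, R.T @ R))   # True
print(np.allclose(x_normal, x_qr))     # True
```

The practical motivation, beyond elegance, is numerical: forming A'A roughly squares the condition number of the problem, while the QR route works on A directly.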