Scaled total least squares fundamentals

Summary. The standard approaches to solving an overdetermined linear system $Bx \approx c$ construct minimal corrections to the vector $c$ and/or the matrix $B$ such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to $c$, while in data least squares (DLS) it is restricted to $B$. In scaled total least squares (STLS) [22], corrections to both $c$ and $B$ are allowed, and their relative sizes depend on a real positive parameter $\gamma$. STLS unifies several formulations: it becomes total least squares (TLS) when $\gamma = 1$, and it corresponds in the limit to LS as $\gamma \rightarrow 0$ and to DLS as $\gamma \rightarrow \infty$. This paper analyzes a particularly useful formulation of the STLS problem. The analysis is based on a new assumption that guarantees the existence and uniqueness of meaningful STLS solutions for all parameters $\gamma > 0$, and it makes the whole STLS theory consistent. Our theory reveals the necessary and sufficient condition for preserving the smallest singular value of a matrix when a column is appended to (or deleted from) it. This condition represents a basic matrix-theory result for updating the singular value decomposition, as well as for the rank-one modification of the Hermitian eigenproblem. The paper allows complex data, and the equivalences in the limit of STLS with DLS and LS are proven for such data. It is shown how any linear system $Bx \approx c$ can be reduced to a minimally dimensioned core system satisfying our assumption; consequently, our theory and algorithms can be applied to fully general systems. The basics of practical algorithms for both the STLS and DLS problems are indicated for dense as well as large sparse systems. Our assumption and its consequences are compared with earlier approaches.
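
To make the role of $\gamma$ concrete, here is a minimal numerical sketch (an illustration, not the paper's algorithm): it treats the STLS problem $\min \|[E,\ \gamma r]\|_F$ subject to $(B+E)x = c + r$ by applying the classical SVD-based TLS construction to the scaled system $B\tilde{x} \approx \gamma c$ and then setting $x = \tilde{x}/\gamma$. The function name `stls` and the generic-case check are assumptions made for the sketch.

```python
import numpy as np

def stls(B, c, gamma):
    """STLS sketch: minimize ||[E, gamma*r]||_F subject to (B + E) x = c + r.
    Equivalent to ordinary TLS for B*xt ~ gamma*c with x = xt / gamma,
    solved here via the SVD of the augmented matrix [B, gamma*c].
    Assumes the generic case (last component of the singular vector nonzero);
    the interface is illustrative only."""
    m, n = B.shape
    M = np.column_stack([B, gamma * c])   # augmented matrix [B, gamma*c]
    _, _, Vh = np.linalg.svd(M)           # rows of Vh are (conjugated) right singular vectors
    v = Vh[-1].conj()                     # right singular vector for the smallest singular value
    if abs(v[n]) < 1e-14:
        raise ValueError("nongeneric problem: meaningful STLS solution may not exist")
    xt = -v[:n] / v[n]                    # TLS solution of B*xt ~ gamma*c
    return xt / gamma                     # undo the scaling of the right-hand side

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 5))
c = rng.standard_normal(20)

x_ls, *_ = np.linalg.lstsq(B, c, rcond=None)
print(np.linalg.norm(stls(B, c, 1e-6) - x_ls))  # should be small: gamma -> 0 recovers LS
print(stls(B, c, 1.0))                          # gamma = 1 is ordinary TLS
```

With small $\gamma$ the computed solution should approach the LS solution, and with large $\gamma$ the DLS solution, matching the limits stated in the summary, provided the generic-case assumption (here, the paper's existence and uniqueness condition is not checked) holds.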
