The Total Least Squares (TLS) method generalizes the least squares (LS) method: it minimizes ∥[E | r]∥_F subject to (b + r) ∈ Range(A + E), given A ∈ C^{m×n} with m ≥ n and b ∈ C^{m×1}. The most popular TLS algorithm is based on the singular value decomposition (SVD) of [A | b]. In applications where the matrix A has a special structure, however, SVD-based methods may not be appropriate, since they do not preserve that structure. Recently, a new formulation, called Total Least Norm (TLN), and an algorithm for computing the TLN solution have been developed. TLN preserves any special structure of A or [A | b] and can minimize a measure of the error in the discrete L_p norm, for p = 1, 2, or ∞. This chapter studies the application of the TLN method to various parameter estimation problems in which the perturbation matrix E or [E | r] retains the Toeplitz structure of the data matrix A or [A | b]. In particular, the L_2-norm TLN method is compared with the ordinary LS and TLS methods on deconvolution, transfer function modeling, and linear prediction problems, and is shown to improve the accuracy of the parameter estimates by a factor of 2 to 40 at any signal-to-noise ratio.
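As a point of reference for the comparisons above, the classical SVD-based TLS solution can be sketched as follows. This is a minimal illustration assuming real data and a unique solution (the helper name `tls_solve` is ours, not from the chapter); it is exactly the unstructured method that, as noted, does not preserve Toeplitz structure in E.

```python
import numpy as np

def tls_solve(A, b):
    """Classical SVD-based TLS sketch: minimize ||[E | r]||_F subject to
    (b + r) in Range(A + E), via the SVD of the augmented matrix [A | b]."""
    C = np.column_stack([A, b])          # form [A | b]
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                           # right singular vector for the
                                         # smallest singular value of [A | b]
    n = A.shape[1]
    if np.isclose(v[n], 0.0):
        # Last component zero: the (generic) TLS solution does not exist.
        raise ValueError("TLS solution does not exist")
    return -v[:n] / v[n]                 # x such that [x; -1] ~ v
```

When the data are exact (b lies in Range(A)), the smallest singular value of [A | b] is zero and the TLS and LS solutions coincide.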