Modified gradient algorithm for total least square filtering

Abstract: A neural approach to the parameter estimation of adaptive FIR filters for linear system identification is presented in this paper. It is based on a linear neuron with a modified gradient algorithm, capable of solving the total least squares (TLS) problem that arises in this kind of estimation, where noise affects not only the observation vector but also the data matrix. The learning rule is analyzed mathematically. Computer simulation results are given to illustrate that the neural approach considerably outperforms existing TLS methods when a larger learning factor is used or the signal-to-noise ratio (SNR) is lower.
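As a rough illustration of the kind of learning rule the abstract describes, the sketch below implements a generic minor-component (Rayleigh-quotient) gradient update of a single linear neuron on the augmented data vector [regressor; observation], from which the TLS FIR estimate is read off. This is a minimal sketch under stated assumptions: the function name, step size, and toy signals are invented for the example, and the update shown is a standard TLS/MCA-style gradient rule, not necessarily the paper's exact modified gradient algorithm.

```python
import numpy as np

def tls_fir_identify(x, d, order, mu=1e-3, seed=0):
    """Hedged sketch: TLS-style adaptive FIR identification using a
    minor-component gradient rule on the augmented data vector.
    Illustrative only; not the paper's exact modified gradient algorithm."""
    rng = np.random.default_rng(seed)
    L = order
    # Augmented weight vector w = [a; b]; the TLS filter estimate is h = -a / b.
    w = rng.standard_normal(L + 1)
    w /= np.linalg.norm(w)
    for k in range(L - 1, len(x)):
        # Augmented sample: noisy regressor (row of the data matrix) plus noisy observation.
        z = np.concatenate((x[k - L + 1:k + 1][::-1], [d[k]]))
        nw = w @ w                       # ||w||^2
        e = w @ z                        # output of the linear neuron
        # Stochastic gradient of the Rayleigh quotient e^2 / ||w||^2:
        # descending it drives w toward the minor eigenvector of E[z z^T],
        # which encodes the TLS solution.
        grad = (e * z - (e * e / nw) * w) / nw
        w = w - mu * grad
    a, b = w[:L], w[L]
    return -a / b                        # TLS estimate of the FIR coefficients

if __name__ == "__main__":
    # Toy example: identify a 4-tap FIR system with noise on both input and output,
    # i.e. errors in both the data matrix and the observation vector.
    rng = np.random.default_rng(1)
    h_true = np.array([0.8, -0.4, 0.2, 0.1])
    n = 20000
    x_clean = rng.standard_normal(n)
    d_clean = np.convolve(x_clean, h_true, mode="full")[:n]
    x = x_clean + 0.05 * rng.standard_normal(n)   # noisy input (data matrix)
    d = d_clean + 0.05 * rng.standard_normal(n)   # noisy observation vector
    print(tls_fir_identify(x, d, order=4, mu=1e-3))
```

In this setting an ordinary LMS/least-squares fit is biased by the input noise, whereas the minor-component solution treats both error sources symmetrically, which is the motivation for TLS filtering given in the abstract.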
