NSLMS: a proportional weight algorithm for sparse adaptive filters

We discuss a proportional weight algorithm similar to least mean square (LMS). The distinction is that the new algorithm, called normalized sparse LMS (NSLMS), has a time-varying vector step size whose coefficients are proportional to the magnitudes of the current tap estimates. We show that when the system to be identified is sparse, NSLMS converges faster than LMS while attaining the same asymptotic MMSE. We also discuss the effect of initialization on the performance of NSLMS.
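To make the update concrete, below is a minimal Python sketch of an NSLMS-style identification loop. The proportional vector step size, with each coefficient scaled by |w_i(n)|, follows the abstract; the specific normalization term (input energy weighted by the proportional gains), the function name nslms_identify, and all parameter values are assumptions for illustration, not taken from the paper. One consequence of the proportional step size is visible directly in the code: a tap whose estimate is exactly zero receives a zero update and never moves, which is why a small nonzero initialization is used here and, presumably, why the abstract flags initialization as affecting performance.

```python
import numpy as np

def nslms_identify(x, d, num_taps, mu=0.05, w_init=0.01, eps=1e-8):
    """Sketch of an NSLMS-style sparse system identification loop.

    The per-tap step size is proportional to |w_i(n)|, as the abstract
    describes. The normalization term below is an assumption for this
    sketch, not necessarily the one used in the paper.
    """
    w = np.full(num_taps, float(w_init))  # nonzero init: a tap at 0 gets zero step
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]  # tap-delay line: [x[n], ..., x[n-M+1]]
        e[n] = d[n] - w @ u                    # instantaneous estimation error
        g = np.abs(w)                          # vector step size ~ |tap estimates|
        w = w + (mu / (g @ (u * u) + eps)) * e[n] * g * u
    return w, e

# Hypothetical usage: identify a sparse 32-tap FIR system from white noise.
rng = np.random.default_rng(0)
h = np.zeros(32); h[3], h[17] = 0.9, -0.4     # sparse "true" system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat, err = nslms_identify(x, d, num_taps=32)
```

Because the update for each tap scales with that tap's current magnitude, the large taps of a sparse system adapt quickly while near-zero taps stay nearly frozen, which is the intuition behind the faster convergence claimed above.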
