Simplification of stochastic fastest NLMS algorithm
The normalized LMS (NLMS) algorithm becomes unstable when all elements of the input state vector are very small, because its coefficient update divides by the norm of that vector. A known countermeasure is to interrupt the update of the adaptive filter coefficients whenever the norm of the input state vector falls below a threshold. For this scheme, the guarantee value (the least upper bound of the estimation error) and the stochastically fastest convergence step gain under interrupted coefficient updates have already been derived. In this paper, we simplify that method; as a result, the proposed algorithm no longer requires statistics that are difficult to obtain under normal conditions.
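To make the update-interruption idea concrete, the following is a minimal sketch of an NLMS filter that skips the coefficient update when the squared norm of the input state vector is below a threshold. The function name, signal interface, step gain mu, and threshold value are illustrative assumptions; they are not the paper's derived stochastically fastest step gain or its simplified rule.

```python
import numpy as np

def nlms_with_interruption(x, d, filter_len, mu=0.5, norm_threshold=1e-6):
    """Toy NLMS adaptive filter with interrupted updates (illustrative only).

    x : input signal, d : desired signal.
    When the squared norm of the input state vector is below norm_threshold,
    the coefficient update is skipped, avoiding the unstable division."""
    w = np.zeros(filter_len)          # adaptive filter coefficients
    y = np.zeros(len(x))              # filter output
    e = np.zeros(len(x))              # estimation error
    for n in range(filter_len - 1, len(x)):
        u = x[n - filter_len + 1:n + 1][::-1]   # state vector [x[n], ..., x[n-L+1]]
        y[n] = w @ u
        e[n] = d[n] - y[n]
        norm_sq = u @ u
        if norm_sq > norm_threshold:            # update only for sufficiently large inputs
            w += mu * e[n] * u / norm_sq        # standard NLMS update
        # otherwise the update is interrupted and w is left unchanged
    return w, y, e
```

As a quick check, one can identify a short FIR system from noisy data, e.g. `w, y, e = nlms_with_interruption(x, np.convolve(x, h)[:len(x)] + noise, len(h))`, and observe that the error decays while the update is simply skipped during near-silent input segments.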