Analysis of convergence of adaptive single-step algorithms for the identification of non-stationary objects

The study addresses the identification of the non-stationary parameters of a linear object described by a first-order Markov model using the computationally simplest single-step adaptive identification algorithms: the modified Kaczmarz and Nagumo-Noda algorithms. These algorithms do not require prior knowledge of the degree of non-stationarity of the object under study and use only a single step of measurements when building the model. The modification consists in introducing a regularizing addition into the algorithms to improve their numerical properties and avoid division by zero. The Markov model is convenient because it allows analytic estimates of the properties of the algorithms to be obtained. It is shown that the regularizing addition, while improving the stability of the algorithms, somewhat slows down the construction of the model. Conditions for convergence of the regularized Kaczmarz and Nagumo-Noda algorithms in the mean and in the mean square in the presence of measurement noise are established. The resulting estimates are more accurate than those available previously; at the same time they are quite general, depending both on the degree of non-stationarity of the object and on the statistical characteristics of the noise. In addition, expressions are derived for the optimal values of the algorithm parameters that ensure the maximum convergence rate under non-stationarity and Gaussian noise. These analytical expressions contain several unknown quantities (the estimation error, the degree of non-stationarity of the object, the statistical characteristics of the noise); for practical application, a recurrent procedure should be used to estimate these quantities, and the resulting estimates applied to refine the parameters entering the algorithms.
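
A minimal numerical sketch may help fix ideas about the two single-step updates discussed above. It assumes one common form of the regularized Kaczmarz update, in which the prediction error is normalized by alpha + x'x, and a sign-based form of the Nagumo-Noda update normalized by alpha + the l1 norm of the input; the random-walk drift model, the step size gamma and the regularizer alpha are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the authors' code): regularized single-step Kaczmarz and
# Nagumo-Noda updates tracking a drifting parameter vector. The drift model,
# the step size gamma, the regularizer alpha and the sign-based form of the
# Nagumo-Noda update are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n = 4            # number of object parameters (assumed)
steps = 2000     # number of measurement steps (assumed)
gamma = 0.5      # step-size parameter of the algorithms (assumed)
alpha = 1e-2     # regularizing addition that prevents division by zero (assumed)
drift_std = 1e-3 # intensity of the first-order Markov (random-walk) drift (assumed)
noise_std = 0.05 # standard deviation of the Gaussian measurement noise (assumed)

theta = rng.normal(size=n)   # true (drifting) object parameters
theta_kz = np.zeros(n)       # regularized Kaczmarz estimate
theta_nn = np.zeros(n)       # regularized Nagumo-Noda estimate

for _ in range(steps):
    # first-order Markov drift of the object parameters
    theta = theta + drift_std * rng.normal(size=n)

    # one step of measurements: input vector and noisy scalar output
    x = rng.normal(size=n)
    y = x @ theta + noise_std * rng.normal()

    # regularized Kaczmarz update: error normalized by alpha + ||x||^2
    e_kz = y - x @ theta_kz
    theta_kz = theta_kz + gamma * e_kz * x / (alpha + x @ x)

    # regularized Nagumo-Noda update (sign-based form): normalized by alpha + ||x||_1
    e_nn = y - x @ theta_nn
    theta_nn = theta_nn + gamma * e_nn * np.sign(x) / (alpha + np.abs(x).sum())

print("Kaczmarz    RMS parameter error:", np.sqrt(np.mean((theta - theta_kz) ** 2)))
print("Nagumo-Noda RMS parameter error:", np.sqrt(np.mean((theta - theta_nn) ** 2)))
```

In this sketch, increasing alpha makes the updates more stable but slows tracking of the drifting parameters, which mirrors the trade-off between stability and convergence rate noted in the abstract; gamma plays the role of the step-size parameter whose optimal value the paper derives analytically.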
