Quantized Kernel Least Mean Square Algorithm

In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea is to quantize, and hence compress, the input (or feature) space. Unlike sparsification, which simply discards the "redundant" data, the new approach uses them to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence is carried out: the energy conservation relation for QKLMS is established, and on this basis we derive a sufficient condition for mean square convergence as well as lower and upper bounds on the theoretical steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance of the proposed algorithm.
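
The recursion described above can be sketched in a few lines. The following is a minimal illustration, not the authors' reference implementation: it assumes a Gaussian kernel and uses illustrative parameter names (step size eta, quantization size epsilon, kernel width sigma). For each new sample, the filter computes the prediction error; if the input lies within epsilon of an existing center it updates that center's coefficient, otherwise it allocates a new center.

import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    # Gaussian (RBF) kernel; the kernel choice is an assumption for this sketch
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

class QKLMS:
    def __init__(self, eta=0.5, epsilon=0.1, sigma=1.0):
        self.eta = eta          # step size
        self.epsilon = epsilon  # quantization size of the input space
        self.sigma = sigma      # kernel width
        self.centers = []       # codebook (quantized input space)
        self.coeffs = []        # one coefficient per center

    def predict(self, u):
        u = np.asarray(u, dtype=float)
        return sum(a * gaussian_kernel(u, c, self.sigma)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, u, d):
        u = np.asarray(u, dtype=float)
        e = d - self.predict(u)  # prediction error
        if not self.centers:
            # first sample: initialize the codebook
            self.centers.append(u)
            self.coeffs.append(self.eta * e)
            return e
        dists = [np.linalg.norm(u - c) for c in self.centers]
        j = int(np.argmin(dists))
        if dists[j] <= self.epsilon:
            # "redundant" input: update the coefficient of the closest center
            self.coeffs[j] += self.eta * e
        else:
            # novel input: allocate a new center
            self.centers.append(u)
            self.coeffs.append(self.eta * e)
        return e

# Example (hypothetical setup): online estimation of a static nonlinearity
rng = np.random.default_rng(0)
f = QKLMS(eta=0.5, epsilon=0.2, sigma=0.5)
for _ in range(2000):
    u = rng.uniform(-1.0, 1.0, size=1)
    d = np.sin(3.0 * u[0]) + 0.01 * rng.standard_normal()
    f.update(u, d)
print(len(f.centers))  # network size stays bounded by the quantization

Because every input closer than epsilon to an existing center reuses that center, the number of radial basis functions is bounded by a covering of the input region rather than by the number of training samples, which is the compression effect the quantization is meant to achieve.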
