An importance-weighted projection method for incremental learning in non-stationary environments

In this paper, we propose a new projection method for incremental learning with a fixed number of kernels. Once the number of kernels reaches this upper bound, the learning machine must discard part of its memory, to varying degrees, to make room for recording a new instance. If the environment is assumed to be ergodic, so that previously learned samples will reappear later, the learning machine should keep the amount of discarded memory as small as possible in order to respond correctly when it encounters those samples again. To achieve this, we construct a kernel-based projection method that minimizes the magnitude of forgetting as well as the error on the current instance. The method is then extended to take the sample distribution into account. Experimental results show that the proposed method outperforms other kernel-based learning methods in minimizing the mean squared error on the given samples.
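The abstract gives the mechanism only at a high level, so the following is a minimal sketch of the general idea rather than the authors' actual algorithm. Everything here is an assumption made for illustration: the Gaussian kernel, the crude importance score |α_i| used to pick which kernel to discard, and the names BudgetedKernelRegressor, gamma, lam, and lr do not come from the paper. The sketch shows the projection step that the abstract describes: when the kernel budget is full, the discarded kernel's contribution is projected, in the least-squares sense, onto the remaining dictionary so that the output function, and hence the memory of past samples, changes as little as possible. The paper's importance-weighted extension would additionally weight this step by the sample distribution.

```python
import numpy as np

def rbf(x, c, gamma=1.0):
    """Gaussian (RBF) kernel between a point x and a centre c."""
    return np.exp(-gamma * np.sum((np.asarray(x) - c) ** 2))

class BudgetedKernelRegressor:
    """Illustrative budgeted kernel regressor with projection-based
    forgetting; names and parameters are assumptions, not the paper's."""

    def __init__(self, budget=50, gamma=1.0, lam=1e-6):
        self.budget = budget      # upper bound on stored kernels
        self.gamma = gamma        # RBF width (assumed)
        self.lam = lam            # small ridge term for a stable solve
        self.centres = []         # kernel centres (stored samples)
        self.alpha = np.zeros(0)  # output weights

    def predict(self, x):
        if not self.centres:
            return 0.0
        k = np.array([rbf(x, c, self.gamma) for c in self.centres])
        return float(self.alpha @ k)

    def learn(self, x, y, lr=0.5):
        err = y - self.predict(x)
        if len(self.centres) < self.budget:
            # room left: allocate a new kernel for the instance
            self.centres.append(np.asarray(x, dtype=float))
            self.alpha = np.append(self.alpha, lr * err)
            return
        # budget reached: discard one kernel and project its
        # contribution onto the remaining dictionary so the change
        # of the learned function (the "forgetting") is minimal.
        # |alpha_i| is a stand-in importance score; the paper's
        # extension would weight this by the sample distribution.
        drop = int(np.argmin(np.abs(self.alpha)))
        c_d, a_d = self.centres.pop(drop), self.alpha[drop]
        self.alpha = np.delete(self.alpha, drop)
        # Gram matrix of the remaining centres and the dropped
        # kernel's similarities to them
        K = np.array([[rbf(ci, cj, self.gamma) for cj in self.centres]
                      for ci in self.centres])
        k_d = np.array([rbf(c_d, c, self.gamma) for c in self.centres])
        # least-squares projection: solve (K + lam I) beta = k_d,
        # then fold the dropped weight back into the survivors
        beta = np.linalg.solve(K + self.lam * np.eye(len(K)), k_d)
        self.alpha += a_d * beta
        # finally absorb the new instance with a gradient step
        k_x = np.array([rbf(x, c, self.gamma) for c in self.centres])
        self.alpha += lr * (y - float(self.alpha @ k_x)) * k_x
```

Under these assumptions, the projection step is a single regularized linear solve per discarded kernel, which keeps the per-update cost bounded by the budget rather than by the total number of samples seen.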
