Ensemble of classifiers based incremental learning with dynamic voting weight update

An incremental learning algorithm based on weighted majority voting of an ensemble of classifiers is introduced for supervised neural networks, where the voting weights are updated dynamically based on the current test input of unknown class. The dynamic voting weight update is an enhancement to our previously introduced incremental learning algorithm, Learn++. The algorithm is capable of incrementally learning new information from additional datasets that may later become available, even when those datasets include instances from classes that were not previously seen. Furthermore, the algorithm retains formerly acquired knowledge without requiring access to the datasets used earlier, striking a delicate balance between stability and plasticity. The algorithm creates additional ensembles of classifiers based on an iteratively updated distribution function over the training data that favors increasingly difficult-to-learn instances, that is, those not yet learned or not previously seen. The final classification is made by weighted majority voting of all classifier outputs in the ensemble, where the voting weights are determined dynamically at test time, based on the estimated performance of each classifier on the current test instance. We present the algorithm in its entirety, along with its promising simulation results on two real-world applications.
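To make the combination scheme concrete, the following Python sketch illustrates the general idea under stated assumptions: base classifiers (here, small scikit-learn MLPs, an implementation choice not prescribed by the paper) are trained on samples drawn from an iteratively updated distribution, and at test time each classifier receives a voting weight computed per test instance. The distribution update and the per-instance weight (here, the classifier's own confidence on the test input) are simplified placeholders for the paper's actual measures, and all function names are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier


def train_ensemble(X, y, n_classifiers=5, seed=0):
    """Train base classifiers on bootstrap samples drawn from an iteratively
    updated distribution that favors instances the ensemble still misclassifies."""
    rng = np.random.default_rng(seed)
    n = len(X)
    dist = np.full(n, 1.0 / n)  # sampling distribution over the training data
    ensemble = []
    for _ in range(n_classifiers):
        idx = rng.choice(n, size=n, replace=True, p=dist)
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=seed)
        clf.fit(X[idx], y[idx])
        ensemble.append(clf)
        # Emphasize instances the current ensemble gets wrong (a simple stand-in
        # for the Learn++-style distribution update described in the abstract).
        wrong = majority_vote(ensemble, X, np.ones(len(ensemble))) != y
        dist = np.where(wrong, dist * 2.0, dist)
        dist /= dist.sum()
    return ensemble


def majority_vote(ensemble, X, weights):
    """Weighted majority vote over the union of class labels seen by the ensemble."""
    classes = np.unique(np.concatenate([clf.classes_ for clf in ensemble]))
    votes = np.zeros((len(X), len(classes)))
    for clf, w in zip(ensemble, weights):
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += w * (pred == c)
    return classes[np.argmax(votes, axis=1)]


def dynamic_weights(ensemble, x):
    """Per-instance voting weights. Here each classifier's confidence (maximum
    posterior) on the test instance stands in for its estimated performance."""
    return np.array([clf.predict_proba(x.reshape(1, -1)).max() for clf in ensemble])


def predict(ensemble, X):
    """Classify each test instance using instance-specific voting weights."""
    return np.array([
        majority_vote(ensemble, x.reshape(1, -1), dynamic_weights(ensemble, x))[0]
        for x in X
    ])
```

The point of the sketch is structural rather than faithful to the paper's exact update rules: the voting weights are not fixed at training time but recomputed for every test instance, which is what allows the ensemble to defer to the classifiers expected to perform well on that particular input.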
