Ensembles of Artificial Example based Neural Networks

Ensembles of several neural networks are widely used to improve generalization performance over a single network. Proper diversity among the component networks is considered an important factor in ensemble construction, so that the failure of one network may be compensated by the others. Data sampling, i.e., using different training sets for different networks, is the most widely investigated technique for achieving diversity. This paper presents a data-sampling-based neural network ensemble method in which each individual network is trained on the union of the original training set and a set of artificially generated examples. The generated examples differ from network to network and are the source of diversity among the networks. After each network is trained, the method checks whether the trained network is suitable for the ensemble and accepts or rejects it based on the ensemble's performance with that network included. The effectiveness of the method is evaluated on a suite of 20 benchmark classification problems. The experimental results show that the performance of this ensemble method is better than or competitive with existing popular methods.
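The construction loop the abstract describes can be sketched roughly as follows. This is an illustrative sketch, not the paper's exact algorithm: the artificial-example generator, the acceptance criterion, and the classifier are all simplified assumptions, and a 1-nearest-neighbour rule stands in for a trained neural network so the sketch stays dependency-free.

```python
import random

def train_member(data):
    # "Training" the 1-NN stand-in just means storing the labelled data.
    return list(data)

def predict(member, x):
    # The nearest stored example decides the label.
    nearest = min(member, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

def ensemble_predict(ensemble, x):
    # Majority vote over the member predictions.
    votes = [predict(m, x) for m in ensemble]
    return max(set(votes), key=votes.count)

def accuracy(ensemble, val):
    return sum(ensemble_predict(ensemble, x) == y for x, y in val) / len(val)

def make_artificial(train, rng, n=5):
    # Perturb randomly chosen training examples to create artificial ones;
    # each member gets a different random set, which is the diversity source.
    return [(x + rng.uniform(-0.5, 0.5), y) for x, y in rng.choices(train, k=n)]

def build_ensemble(train, val, n_members=5, seed=0):
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_members):
        # Train the candidate on original data plus its own artificial examples.
        candidate = train_member(train + make_artificial(train, rng))
        trial = ensemble + [candidate]
        # Accept the candidate only if ensemble accuracy does not drop.
        if not ensemble or accuracy(trial, val) >= accuracy(ensemble, val):
            ensemble = trial
    return ensemble

# Toy 1-D, two-class data: class 0 near x=0, class 1 near x=5.
train = [(0.0, 0), (1.0, 0), (4.0, 1), (5.0, 1)]
val = [(0.5, 0), (4.5, 1)]
ens = build_ensemble(train, val)
print(len(ens), accuracy(ens, val))
```

The acceptance test mirrors the abstract's idea of "absorbing" a network only when the ensemble performs well with it; in the paper the candidate is a trained neural network and the evaluation criterion is the method's own, which this sketch does not reproduce.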
