Ensembles of Neural Networks through Crossover-Based Pattern Generation

The goal of constructing an ensemble of several neural networks is to achieve better generalization than a single neural network can. A Neural Network Ensemble (NNE) performs well when its component networks are diverse, so that the failure of one is compensated for by the others. Training data variation (i.e., different training sets for different networks) is a good source of diversity because the function a network approximates is learned from its training data. We introduce a new approach to training data variation and propose the Ensemble based on Crossover based Pattern Generation (ECPG). ECPG generates new training patterns for a particular network: a pair of patterns is generated by interchanging some of the input feature values between a pair of selected original patterns. The effectiveness of ECPG was evaluated on several benchmark classification problems, and ECPG was found to achieve better or competitive performance with respect to related conventional methods. Given its several benefits over conventional methods, crossover-based pattern generation appears to be a good technique for ensemble construction.
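The pattern-generation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the uniform per-feature `swap_prob` parameter, and the choice of swapping each feature independently are all assumptions made for the sketch.

```python
import random

def crossover_patterns(x1, x2, swap_prob=0.5, rng=None):
    """Generate a pair of new training patterns by interchanging a
    random subset of input feature values between two original
    patterns (hypothetical sketch; swap_prob is illustrative)."""
    rng = rng or random.Random()
    child1, child2 = list(x1), list(x2)
    for i in range(len(child1)):
        # Exchange the i-th feature value between the two patterns
        # with probability swap_prob.
        if rng.random() < swap_prob:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2
```

For example, crossing `[1, 2, 3, 4]` with `[5, 6, 7, 8]` yields two patterns in which each position holds one value from each parent, so feature values (and hence class-relevant structure) come only from the original data while each component network still sees a different training set.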
