A Diversity-Production Approach for an Ensemble of Base Classifiers

A crucial issue in the design of combinational classifier systems is maintaining diversity among the outputs of the base classifiers so that an appropriate final decision can be reached: the more diverse the classifiers' results, the better the combined result. This paper proposes a new approach for generating diversity during ensemble creation, together with a new combining classifier system. The main idea of this system is heuristic retraining of some base classifiers. First, a basic classifier is trained; then, with regard to that classifier's drawbacks, other base classifiers are retrained heuristically, each looking at the data from its own perspective. The retrained classifiers mainly aim to leverage the error-prone data, and they usually cast different votes on sample points that lie close to decision boundaries and are therefore likely to be misclassified. Like all ensemble learning approaches, the proposed ensemble meta-learner can be built on top of any base classifier. The main contributions are to preserve some advantages of these classifiers while resolving some of their drawbacks, and consequently to improve classification performance. This study investigates how the performance of any base classifier can be reinforced by focusing on a few crucial data points. The paper also shows that simply multiplying the "difficult" data points in the training set, as boosting does, does not always produce a better training set. Experiments show significant improvements in consensus classification accuracy, and the proposed algorithm outperforms some of the best methods in the literature. Finally, based on the experimental results, the authors claim that both forcing crucial data points into the training set and eliminating them from it can, under the right conditions, lead to more accurate results.
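The idea described above can be illustrated with a minimal sketch. This is not the authors' exact algorithm, only an assumed reading of it: train a basic classifier, mark the training points it gets wrong (the "error-prone" data near boundaries), then heuristically retrain two further members, one that over-weights those hard points and one that drops them, and combine all three by majority vote. The dataset, base learner, and hyperparameters here are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1. Train a basic classifier and locate its error-prone training points.
base = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
hard = base.predict(X_tr) != y_tr  # points the basic classifier misclassifies

# 2. Heuristic retraining with two different "attitudes" toward the data:
#    one member forces the hard points into the training set (duplicates them),
#    the other eliminates them from the training set entirely.
emphasized = DecisionTreeClassifier(max_depth=3, random_state=1).fit(
    np.vstack([X_tr, X_tr[hard]]), np.hstack([y_tr, y_tr[hard]]))
pruned = DecisionTreeClassifier(max_depth=3, random_state=2).fit(
    X_tr[~hard], y_tr[~hard])

# 3. Combine the three members by majority vote for the consensus decision.
votes = np.stack([m.predict(X_te) for m in (base, emphasized, pruned)])
consensus = np.apply_along_axis(
    lambda col: np.bincount(col).argmax(), axis=0, arr=votes)
accuracy = (consensus == y_te).mean()
```

The two retrained members disagree mostly on samples near the decision boundary, which is exactly the diversity the ensemble exploits at voting time.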
