Classifier ensembles using Boosting with Mixed Learner Models (BMLM)

Bagging and Boosting are among the best-known classifier ensemble methods and have been applied in many pattern classification tasks. This paper proposes an alternative approach to building classifier ensembles with boosting. Boosting is usually applied to improve the performance of a single base classifier; in the proposed Boosting with Mixed Learner Models (BMLM), boosting instead enhances base classifiers trained separately on the numerical and categorical feature subsets of the same dataset. BMLM is applied to the classification of three UCI (University of California, Irvine) datasets. Diversity between the base learners is also measured and is found to support the improved recognition rates. The results show that BMLM yields up to a 3% increase in classification accuracy over both the base classifier and a single boosted classifier.
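The pipeline described above (split the feature set into numerical and categorical subsets, boost a base learner on each, combine the boosted models, and measure diversity between them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: decision stumps as base learners, a weighted-vote combiner, and pairwise disagreement as the diversity measure are all assumptions, and the data is synthetic.

```python
import math
import random

def stump_predict(x, feat, thresh, sign):
    # Decision stump: +sign if x[feat] > thresh, else -sign.
    return sign if x[feat] > thresh else -sign

def best_stump(X, y, w):
    # Exhaustively pick the stump minimizing weighted training error.
    best = None
    for feat in range(len(X[0])):
        for thresh in sorted({x[feat] for x in X}):
            for sign in (1, -1):
                err = sum(wi for x, yi, wi in zip(X, y, w)
                          if stump_predict(x, feat, thresh, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, feat, thresh, sign)
    return best

def adaboost(X, y, rounds=10):
    # Discrete AdaBoost with stumps; labels y in {-1, +1}.
    n = len(X)
    w = [1.0 / n] * n
    model = []  # list of (alpha, (feat, thresh, sign))
    for _ in range(rounds):
        err, feat, thresh, sign = best_stump(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, (feat, thresh, sign)))
        # Re-weight: misclassified points get heavier.
        w = [wi * math.exp(-alpha * yi * stump_predict(x, feat, thresh, sign))
             for x, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def score(model, x):
    # Weighted vote of the boosted stumps.
    return sum(a * stump_predict(x, *s) for a, s in model)

random.seed(0)
n = 200
# Numerical feature subset: the label follows the sign of the first feature.
X_num = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [1 if x[0] > 0 else -1 for x in X_num]
# Categorical feature subset (label-encoded), correlated with y but noisy.
X_cat = [[(0 if yi > 0 else 1) if random.random() < 0.85 else
          (1 if yi > 0 else 0)] for yi in y]

m_num = adaboost(X_num, y, rounds=10)  # boosted model on numerical features
m_cat = adaboost(X_cat, y, rounds=10)  # boosted model on categorical features

# Combine the two boosted models by summing their weighted votes.
pred = [1 if score(m_num, xn) + score(m_cat, xc) > 0 else -1
        for xn, xc in zip(X_num, X_cat)]
acc = sum(p == yi for p, yi in zip(pred, y)) / n

# Pairwise disagreement: a simple diversity measure between the two learners.
dis = sum((score(m_num, xn) > 0) != (score(m_cat, xc) > 0)
          for xn, xc in zip(X_num, X_cat)) / n
```

Here the combined vote is dominated by whichever subset model is more confident, and `dis` quantifies how often the two boosted learners disagree, which is the kind of diversity statistic the abstract refers to.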
