An Approach for Selective Ensemble Feature Selection Based on Rough Set Theory

Rough set based knowledge reduction is an important method for feature selection. Ensemble methods are learning algorithms that construct a set of base classifiers and then classify new objects by integrating the predictions of those base classifiers. In this paper, an approach for selective ensemble feature selection based on rough set theory is proposed, which balances the tradeoff between the accuracy and the diversity of the base classifiers. In simulation experiments on UCI datasets, the approach achieves high recognition rates.
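The paper itself gives no code, so the following Python sketch only illustrates the two ingredients the abstract names, under assumed simplifications: a greedy, dependency-based rough set reduct for feature selection (the standard heuristic from the rough set literature, not necessarily the authors' reduction algorithm), and a selective ensemble step that greedily adds base classifiers by a weighted accuracy/diversity score, with pairwise disagreement standing in for the diversity measure. The names heuristic_reduct and select_ensemble, and the weight alpha, are illustrative assumptions, not the paper's notation.

```python
# Illustrative sketch only: greedy rough set reduct + selective ensemble,
# assuming a discrete decision table and per-classifier validation predictions.
from collections import defaultdict


def partition(rows, attrs):
    """Group object indices by their values on the attribute subset `attrs`."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())


def dependency(rows, labels, attrs):
    """gamma(attrs): fraction of objects whose block is decision-consistent."""
    if not attrs:
        return 0.0
    consistent = 0
    for block in partition(rows, attrs):
        if len({labels[i] for i in block}) == 1:
            consistent += len(block)
    return consistent / len(rows)


def heuristic_reduct(rows, labels, all_attrs):
    """Greedy forward selection: add the attribute that raises gamma the most."""
    reduct = []
    gamma_full = dependency(rows, labels, list(all_attrs))
    while dependency(rows, labels, reduct) < gamma_full:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: dependency(rows, labels, reduct + [a]))
        reduct.append(best)
    return reduct


def disagreement(pred_a, pred_b):
    """Pairwise diversity: fraction of objects labelled differently."""
    return sum(x != y for x, y in zip(pred_a, pred_b)) / len(pred_a)


def select_ensemble(predictions, labels, alpha=0.5, size=3):
    """Greedily pick `size` base classifiers by weighted accuracy + diversity.

    `predictions` maps a classifier id to its predicted labels on a
    validation set; `alpha` weights accuracy against mean disagreement.
    """
    accuracy = {c: sum(p == y for p, y in zip(pred, labels)) / len(labels)
                for c, pred in predictions.items()}
    chosen = [max(accuracy, key=accuracy.get)]
    while len(chosen) < min(size, len(predictions)):
        def score(c):
            div = sum(disagreement(predictions[c], predictions[m])
                      for m in chosen) / len(chosen)
            return alpha * accuracy[c] + (1 - alpha) * div
        chosen.append(max((c for c in predictions if c not in chosen), key=score))
    return chosen


if __name__ == "__main__":
    # Tiny illustrative decision table: 4 objects, 3 condition attributes.
    rows = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (0, 0, 0)]
    labels = [1, 1, 0, 0]
    print(heuristic_reduct(rows, labels, [0, 1, 2]))  # -> [0]
```

The alpha weight makes the accuracy/diversity tradeoff explicit; other diversity measures could be substituted for the disagreement term without changing the selection loop.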
