Ensemble Learning Methods for Classifying EEG Signals

Bagging, boosting, and random subspace are three popular ensemble learning methods that have proven effective in many practical classification problems. For electroencephalogram (EEG) signal classification, which arises in recent brain-computer interface (BCI) research, however, few reports have investigated their feasibility. This paper systematically evaluates the performance of these three ensemble methods in their new application to EEG signal classification. Experiments are conducted on data from three BCI subjects, with k-nearest neighbor and decision tree classifiers as base learners. Several valuable conclusions are drawn about the feasibility and performance of ensemble methods for classifying EEG signals.
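As a concrete illustration of the bagging scheme evaluated in this paper, the sketch below trains several k-nearest-neighbor base classifiers on bootstrap resamples of the training set and combines their predictions by majority vote. The one-dimensional toy data, the value of k, and the number of estimators are illustrative assumptions for this sketch, not the paper's actual EEG features or experimental setup.

```python
import random
from collections import Counter

def knn_predict(train, x, k=1):
    # k-nearest-neighbor on 1-D features: majority vote among the k closest points.
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

def bagging_predict(train, x, n_estimators=15, k=1, seed=0):
    # Bagging: fit each base classifier on a bootstrap resample (sampling with
    # replacement, same size as the original set), then majority-vote the outputs.
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        bootstrap = [rng.choice(train) for _ in range(len(train))]
        votes.append(knn_predict(bootstrap, x, k))
    return Counter(votes).most_common(1)[0][0]

# Toy stand-in for extracted EEG features: class 0 clusters near 0.0, class 1 near 1.0.
train = [(0.1, 0), (0.2, 0), (0.15, 0), (0.9, 1), (1.0, 1), (0.85, 1)]
print(bagging_predict(train, 0.2))   # class 0 region
print(bagging_predict(train, 0.95))  # class 1 region
```

Because kNN is run here on bootstrap resamples, each base classifier sees a slightly different training set; the ensemble's vote smooths out the instability of any single resample, which is the core idea the paper tests on EEG data.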
