Empirical Study on Weighted Voting Multiple Classifiers

Combining multiple classifiers is expected to increase classification accuracy, and research on combination strategies for multiple classifiers has become a popular topic. For a crisp classifier, which returns a discrete class label rather than a set of real-valued probabilities for each class, the most commonly used combination method is majority voting. Both majority and weighted majority voting are classifier-based voting schemes, which assign each base classifier a single confidence for all of its votes. However, each classifier should have different voting priorities across its learning space, and these differences cannot be reflected by a classifier-based voting strategy. In this paper, we propose two additional voting strategies that take such differences into consideration. We apply the AdaBoost algorithm to generate multiple classifiers and vary its voting strategy. The prediction ability of each voting strategy is then tested and compared on eight datasets from the UCI Machine Learning Repository. The experimental results show that one of the proposed voting strategies, the sample-based voting scheme, achieves better classification accuracy.
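To make the distinction concrete, the minimal Python sketch below contrasts classifier-based weighted voting, where each classifier receives one AdaBoost-style weight alpha_t = 0.5 * ln((1 - err_t) / err_t) applied uniformly to all of its votes, with a per-sample weighting in which a classifier's influence varies from sample to sample. The prediction matrix, training errors, and per-sample weights are hypothetical placeholders used only to illustrate the bookkeeping; they are not the paper's actual sample-based scheme.

```python
import numpy as np

# Hypothetical example: three crisp base classifiers voting on four samples.
# Rows = classifiers, columns = samples; class labels are {0, 1}.
predictions = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
])

# Classifier-based weighting (standard AdaBoost): one confidence per
# classifier, alpha_t = 0.5 * ln((1 - err_t) / err_t), applied to every
# sample that classifier votes on. Training errors here are assumed values.
train_errors = np.array([0.20, 0.30, 0.35])
alphas = 0.5 * np.log((1 - train_errors) / train_errors)

def weighted_majority_vote(preds, weights):
    """Return, for each sample, the label receiving the largest total weight."""
    votes_for_1 = (preds * weights).sum(axis=0)
    votes_for_0 = ((1 - preds) * weights).sum(axis=0)
    return (votes_for_1 > votes_for_0).astype(int)

# Classifier-based vote: the same alpha is broadcast across all samples.
print(weighted_majority_vote(predictions, alphas[:, None]))

# Sample-based weighting (illustrative only): each classifier gets a weight
# per sample, e.g. reflecting how reliable it was on similar training
# samples. The values below are placeholders, not a derived scheme.
sample_weights = np.array([
    [1.2, 0.4, 0.9, 1.0],
    [0.8, 1.1, 0.3, 0.7],
    [0.5, 0.9, 1.0, 0.6],
])
print(weighted_majority_vote(predictions, sample_weights))
```

The only structural change between the two calls is the shape of the weight array: a single column vector for classifier-based voting versus a full classifier-by-sample matrix for sample-based voting.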
