A Novel Naive Bayes Voting Strategy for Combining Classifiers

Classifier combination methods have proven to be an effective tool for improving performance in pattern recognition applications. The rationale of this approach follows from the observation that appropriately diverse classifiers make uncorrelated errors. Unfortunately, this theoretical assumption is difficult to satisfy in practice, which limits the performance obtainable with any combination strategy. In this paper we propose a new weighted majority vote rule that addresses this problem by jointly analyzing the responses provided by all the experts, in order to capture their collective behavior when classifying a sample. Our rule associates a weight with each class rather than with each expert, and computes such weights by estimating the joint probability distribution of each class together with the set of responses provided by all the experts in the combining pool. This joint distribution is estimated by means of the naive Bayes probabilistic model. Despite its simplicity, this model has been successfully used in many practical applications, often competing with much more sophisticated techniques. Experimental results on three standard databases of handwritten digits confirm the effectiveness of the proposed method.
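The combining rule described above lends itself to a compact implementation. The sketch below is a minimal illustration rather than the authors' exact algorithm: it assumes the class weights are obtained from a labeled validation set by estimating the class priors and the per-expert conditional response tables, and that the naive Bayes assumption (expert responses conditionally independent given the class) lets the joint probability factorize into a product. The function names `fit_nb_combiner` and `nb_vote` and the smoothing parameter `alpha` are hypothetical.

```python
import numpy as np

def fit_nb_combiner(expert_preds, true_labels, n_classes, alpha=1.0):
    """Estimate P(y) and P(e_j = r | y) from a labeled validation set.

    expert_preds: (n_samples, n_experts) array of class labels predicted
                  by each expert for each validation sample.
    true_labels:  (n_samples,) array of true class labels.
    alpha:        Laplace smoothing constant (hypothetical parameter).
    """
    n_samples, n_experts = expert_preds.shape

    # Class priors P(y), smoothed so no class has zero probability.
    priors = np.bincount(true_labels, minlength=n_classes) + alpha
    priors = priors / priors.sum()

    # One conditional table per expert: cond[j, y, r] = P(e_j = r | y).
    cond = np.full((n_experts, n_classes, n_classes), alpha)
    for j in range(n_experts):
        for r, y in zip(expert_preds[:, j], true_labels):
            cond[j, y, r] += 1.0
    cond /= cond.sum(axis=2, keepdims=True)
    return priors, cond

def nb_vote(priors, cond, responses):
    """Combine one vector of expert responses into a class decision.

    Under the naive Bayes assumption the joint probability factorizes:
        P(y | e_1, ..., e_L) ∝ P(y) * Π_j P(e_j | y),
    so the per-class weight is the product of the matching table entries.
    """
    log_post = np.log(priors).copy()
    for j, r in enumerate(responses):
        log_post += np.log(cond[j, :, r])
    return int(np.argmax(log_post))

# Toy usage: 4 validation samples, 3 experts, 2 classes.
preds = np.array([[0, 0, 1], [1, 1, 1], [0, 1, 0], [1, 0, 1]])
labels = np.array([0, 1, 0, 1])
priors, cond = fit_nb_combiner(preds, labels, n_classes=2)
print(nb_vote(priors, cond, [0, 1, 0]))  # most probable class, here 0
```

Working in log space avoids numerical underflow when the pool contains many experts, and the Laplace smoothing keeps a response never observed on the validation set from zeroing out an entire class.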
