Sum Versus Vote Fusion in Multiple Classifier Systems

Amidst conflicting experimental evidence about which of the two is superior, we investigate the Sum and majority Vote combining rules in a two-class case, under the assumption that the experts are of equal strength and that their estimation errors are conditionally independent and identically distributed. We show analytically that, for Gaussian estimation error distributions, Sum always outperforms Vote. For heavy-tailed distributions, we demonstrate by simulation that Vote may outperform Sum. Results on synthetic data confirm the theoretical predictions. Experiments on real data support the general findings, but also show the effects that arise when the usual assumptions of conditional independence, identical error distributions, and common expert target outputs are not fully satisfied.
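
To make the setting concrete, the following minimal simulation sketch (not taken from the paper; names and parameter values such as n_experts, true_posterior, and the noise scale 0.3 are illustrative assumptions) contrasts the two rules: each expert produces a noisy estimate of the class-1 posterior, Sum averages the estimates before thresholding, and Vote takes the majority of the experts' individual hard decisions. With Gaussian noise the average suppresses the estimation errors, whereas with heavy-tailed (here Cauchy) noise a single wild estimate can dominate the average but still casts only one vote.

```python
# Illustrative sketch of Sum vs. Vote fusion in a two-class problem with
# i.i.d. estimation errors; parameter choices are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def fusion_error_rates(noise_sampler, n_experts=5, n_trials=100_000, true_posterior=0.6):
    """Return (sum_error, vote_error) on class-1 samples with the given posterior."""
    # Noisy posterior estimates produced by each expert for each trial.
    noise = noise_sampler((n_trials, n_experts))
    estimates = true_posterior + noise

    # Sum (average) rule: decide class 1 if the mean estimate exceeds 0.5.
    sum_decisions = estimates.mean(axis=1) > 0.5

    # Majority Vote rule: each expert decides individually, then the majority wins.
    votes = (estimates > 0.5).sum(axis=1)
    vote_decisions = votes > n_experts / 2

    # With true_posterior > 0.5 the correct decision is class 1,
    # so an error occurs whenever a rule decides class 0.
    return 1.0 - sum_decisions.mean(), 1.0 - vote_decisions.mean()

# Gaussian estimation errors: the analysis predicts Sum error <= Vote error.
gauss = lambda shape: rng.normal(0.0, 0.3, shape)
print("Gaussian noise (sum, vote):", fusion_error_rates(gauss))

# Heavy-tailed (Cauchy) estimation errors: Vote may now outperform Sum,
# since outliers corrupt the average but not the individual votes.
cauchy = lambda shape: rng.standard_cauchy(shape) * 0.3
print("Cauchy noise   (sum, vote):", fusion_error_rates(cauchy))
```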
