Experts' Boasting in Trainable Fusion Rules

We consider the trainable fusion rule design problem when the expert classifiers provide crisp outputs and the behavior-knowledge space (BKS) method is used to fuse the local experts' decisions. If the same training set is used to design both the experts and the fusion rule, the experts' outputs become overly self-assured. In small-sample situations, these "optimistically biased" expert outputs mislead the fusion rule designer. If the experts differ in complexity and in classification performance, this experts' boasting effect can severely degrade the performance of a multiple classifier system. Theoretically based and experimental procedures are suggested to reduce the experts' boasting effect.
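To make the mechanism concrete, here is a minimal sketch (ours, not the paper's procedure) of the boasting effect. It assumes scikit-learn, synthetic data, and two hypothetical experts of different complexity: a BKS table is filled twice, once on the experts' own training data and once on an independent validation set, and both tables are scored on a test set.

    # Minimal sketch of the experts' boasting effect in BKS fusion.
    # Assumptions (not from the paper): scikit-learn, synthetic data,
    # a linear expert and an overfitting tree expert.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=100, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=0)

    # Two experts of different complexity, trained on the same small set.
    # The tree memorizes the training data, so its training-set outputs
    # look flawless: it "boasts".
    experts = [LogisticRegression(max_iter=1000).fit(X_train, y_train),
               DecisionTreeClassifier(random_state=0).fit(X_train, y_train)]

    def bks_table(X, y):
        """Map each combination of crisp expert outputs to its majority class."""
        keys = list(zip(*(e.predict(X) for e in experts)))
        cells = {}
        for k, label in zip(keys, y):
            cells.setdefault(k, []).append(label)
        return {k: max(set(v), key=v.count) for k, v in cells.items()}

    def bks_accuracy(table, X, y):
        keys = list(zip(*(e.predict(X) for e in experts)))
        # Fall back to the first expert's vote for unseen output combinations.
        preds = [table.get(k, k[0]) for k in keys]
        return float(np.mean(np.array(preds) == y))

    biased = bks_table(X_train, y_train)   # fusion rule reuses training data
    unbiased = bks_table(X_val, y_val)     # fusion rule uses independent data
    print("BKS built on training set, test accuracy:  ",
          bks_accuracy(biased, X_test, y_test))
    print("BKS built on validation set, test accuracy:",
          bks_accuracy(unbiased, X_test, y_test))

On reused training data the overfit tree appears error-free, so the biased table defers to it and typically scores worse on the test set than the table built on independent data; this is the sense in which optimistically biased expert outputs "bluff" the fusion rule designer.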
