Boosting Classifiers Regionally

This paper presents a new algorithm for boosting the performance of an ensemble of classifiers. In Boosting, a series of classifiers is trained to predict the class of data, with later members of the series concentrating on training patterns that earlier members predict incorrectly. To classify a new pattern, each classifier predicts its class and the predictions are combined. In standard Boosting, each prediction is weighted by a term related to the classifier's accuracy on the training data. This approach ignores the fact that later classifiers focus on small subsets of the patterns and thus may only be good at classifying similar patterns. RegionBoost addresses this problem by weighting each classifier's prediction by a factor measuring how well that classifier performs on patterns similar to the one being classified. In this paper we examine several methods for estimating how well a classifier performs on similar patterns. Empirical tests indicate that RegionBoost produces performance gains on some data sets and has little effect on others.
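As a concrete illustration of the combination step, the sketch below weights each member's vote by its accuracy on the k training patterns nearest the query pattern. This is a minimal sketch of one plausible local-accuracy estimate, not necessarily the authors' exact scheme (the paper compares several). The classifiers are assumed to expose a scikit-learn-style predict method; X_train, y_train, the Euclidean metric, and the parameter k are illustrative assumptions.

```python
import numpy as np

def region_boost_predict(x, classifiers, X_train, y_train, k=5):
    """Combine ensemble votes on pattern x, weighting each member by its
    accuracy on the k training patterns most similar to x (a hypothetical
    k-NN local-accuracy estimate, one of several the paper considers)."""
    # Find the k training patterns closest to x under Euclidean distance.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]

    votes = {}
    for clf in classifiers:
        # Local weight: fraction of those neighbors this member gets right.
        local_acc = np.mean(clf.predict(X_train[nearest]) == y_train[nearest])
        pred = clf.predict(x.reshape(1, -1))[0]
        votes[pred] = votes.get(pred, 0.0) + local_acc
    # Return the class with the largest locally weighted vote total.
    return max(votes, key=votes.get)
```

By contrast, standard Boosting would add a single global weight per member, derived from its overall training error, regardless of where x falls in the input space.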
