A decision-theoretic generalization of on-line learning and an application to boosting

In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in R^n. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line. © 1997 Academic Press
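The allocation rule in the first part is the Hedge algorithm: keep one weight per option, play the normalized weights as a distribution, and shrink each weight multiplicatively according to its loss. A minimal sketch, assuming losses in [0, 1] and a fixed parameter beta in (0, 1); the function name and interface are illustrative, not the paper's code:

```python
import numpy as np

def hedge(losses, beta=0.9):
    """Run Hedge on a (rounds x options) array of losses in [0, 1].

    Each round: play the normalized weight vector as a distribution,
    suffer its expected loss, then multiply every weight by
    beta ** loss -- the multiplicative update of Littlestone and Warmuth.
    """
    n_rounds, n_options = losses.shape
    w = np.ones(n_options)
    total = 0.0
    for t in range(n_rounds):
        p = w / w.sum()                 # current distribution over options
        total += float(p @ losses[t])   # expected loss this round
        w *= beta ** losses[t]          # penalize options by their losses
    return total
```

The guarantee is worst-case: the learner's cumulative loss is bounded in terms of the loss of the single best option in hindsight, with no stochastic assumptions on how the losses are generated.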
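The boosting algorithm derived in the second part is AdaBoost. A minimal sketch for binary labels, written in the standard ±1 formulation (equivalent to the paper's beta_t = err_t / (1 - err_t) weights); the weak_learn callback and the early-stopping rule are illustrative assumptions:

```python
import numpy as np

def adaboost(X, y, weak_learn, T=50):
    """AdaBoost sketch: y has entries in {-1, +1}; weak_learn(X, y, dist)
    must return a hypothesis h with h(X) in {-1, +1} per example.

    No prior bound on the weak learner's edge is required: each round's
    coefficient alpha adapts to that round's observed weighted error.
    """
    y = np.asarray(y)
    m = len(y)
    dist = np.ones(m) / m                     # uniform example weights
    hyps, alphas = [], []
    for _ in range(T):
        h = weak_learn(X, y, dist)
        pred = h(X)
        err = dist[pred != y].sum()           # weighted training error
        if err >= 0.5:                        # weak learner has no edge
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        dist *= np.exp(-alpha * y * pred)     # upweight the mistakes
        dist /= dist.sum()
        hyps.append(h)
        alphas.append(alpha)
    return lambda Xq: np.sign(
        sum(a * h(Xq) for a, h in zip(alphas, hyps)))
```

Reweighting concentrates the distribution on the examples the current combination gets wrong, which forces each successive weak hypothesis to supply new information.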
