Boosting simple learners

Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are “rules of thumb” from an “easy-to-learn class” (Schapire and Freund ’12; Shalev-Shwartz and Ben-David ’14). Formally, we assume that the class of weak hypotheses has bounded VC dimension. We focus on two main questions: (i) Oracle complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire (’95, ’12). Whereas the lower bound shows that Ω(1/γ²) weak hypotheses with γ-margin are sometimes necessary, our new method requires only Õ(1/γ) weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex (“deeper”) aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are “far away” from the class be learned? Toward answering the first question, we identify a combinatorial-geometric parameter that captures the expressivity of base classes in boosting. As a corollary, we provide an affirmative answer to the second question for many well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with discrepancy theory.
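For concreteness, below is a minimal sketch of the classical baseline this result improves on: boosting by a weighted majority vote in the style of AdaBoost, with decision stumps (a class of constant VC dimension) as the weak hypotheses. The sketch is written in Python and assumes NumPy; the helper names (stump_predict, best_stump, adaboost, vote) are illustrative, not from the paper. It implements the Θ(1/γ²)-round majority-vote scheme, not the paper's Õ(1/γ) algorithm with deeper aggregation rules.

import numpy as np

def stump_predict(X, feature, threshold, polarity):
    # Decision stump: label +1/-1 by thresholding a single coordinate.
    return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def best_stump(X, y, w):
    # Weak learner: exhaustive search for the stump with the smallest
    # weighted error under the current distribution w.
    best, best_err = (0, 0.0, 1), np.inf
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            for polarity in (1, -1):
                pred = stump_predict(X, feature, threshold, polarity)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (feature, threshold, polarity)
    return best, best_err

def adaboost(X, y, T):
    # T rounds of boosting; returns the stumps and their vote weights.
    n = len(y)
    w = np.full(n, 1.0 / n)               # distribution over examples
    ensemble = []
    for _ in range(T):
        (f, th, pol), err = best_stump(X, y, w)
        err = max(err, 1e-12)             # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, f, th, pol)
        w *= np.exp(-alpha * y * pred)    # reweight toward mistakes
        w /= w.sum()
        ensemble.append((alpha, f, th, pol))
    return ensemble

def vote(ensemble, X):
    # Final hypothesis: sign of the weighted majority vote.
    score = sum(a * stump_predict(X, f, th, pol) for a, f, th, pol in ensemble)
    return np.sign(score)

# Toy usage: the target depends on the first coordinate only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] > 0.3, 1.0, -1.0)
print("train accuracy:", np.mean(vote(adaboost(X, y, T=20), X) == y))

The α = ½·log((1−err)/err) vote weights and the exponential reweighting are the standard AdaBoost choices; with a γ-margin weak learner, the majority vote fits the sample after O(log(n)/γ²) rounds, which is exactly the oracle-complexity barrier that the deeper aggregation rules described above circumvent.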

[1] J. von Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 1928.

[2] N. Sauer. On the density of families of sets. J. Comb. Theory, Ser. A, 1972.

[3] P. Assouad. Densité et dimension. Ann. Inst. Fourier, 1983.

[4] L. G. Valiant. A theory of the learnable. STOC '84, 1984.

[5] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. J. ACM, 1989.

[6] R. Alexander. Geometric methods in the study of irregularities of distribution. Combinatorica, 1990.

[7] Y. Freund. Boosting a weak learning algorithm by majority. COLT '90, 1990.

[8] J. Matoušek, E. Welzl, and L. Wernisch. Discrepancy and approximations for bounded VC-dimension. Combinatorica, 1993.

[9] A. Giannopoulos. A note on the Banach-Mazur distance to the cube, 1995.

[10] J. Matoušek. Tight upper bounds for the discrepancy of half-spaces. Discrete Comput. Geom., 1995.

[11] D. Haussler. Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension. J. Comb. Theory, Ser. A, 1995.

[12] L. Breiman. Arcing the edge. Technical report, 1997.

[13] L. Mason, J. Baxter, P. L. Bartlett, and M. Frean. Boosting algorithms as gradient descent. NIPS, 1999.

[14] L. Breiman. Some infinity theory for predictor ensembles. Technical report, 2000.

[15] S. Mannor and R. Meir. Weak learners and improved rates of convergence in boosting. NIPS, 2000.

[16] P. Bühlmann and B. Yu. Boosting with the L2-loss: regression and classification, 2001.

[17] J. H. Friedman. Greedy function approximation: a gradient boosting machine. Ann. Statist., 2001.

[18] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. CVPR, 2001.

[19] J. H. Friedman. Stochastic gradient boosting. Comput. Stat. Data Anal., 2002.

[20] S. Mannor, R. Meir, and T. Zhang. The consistency of greedy algorithms for classification. COLT, 2002.

[21] P. Bühlmann and B. Yu. Boosting with the L2 loss: regression and classification. J. Amer. Statist. Assoc., 2003.

[22] W. Jiang. Process consistency for AdaBoost. Ann. Statist., 2003.

[23] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Statist., 2003.

[24] G. Lugosi and N. Vayatis. On the Bayes-risk consistency of regularized boosting methods. Ann. Statist., 2003.

[25] G. Blanchard, G. Lugosi, and N. Vayatis. On the rate of convergence of regularized boosting classifiers. J. Mach. Learn. Res., 2003.

[26] R. E. Schapire. The strength of weak learnability. Machine Learning, 1990.

[27] P. L. Bartlett and M. Traskin. AdaBoost is consistent. J. Mach. Learn. Res., 2006.

[28] D. Eisenstat and D. Angluin. The VC dimension of k-fold union. Inf. Process. Lett., 2007.

[29] I. Mukherjee and R. E. Schapire. A theory of multiclass boosting. J. Mach. Learn. Res., 2010.

[30] S. Gey. Vapnik–Chervonenkis dimension of axis-parallel cuts. arXiv:1203.0193, 2012.

[31] R. E. Schapire and Y. Freund. Boosting: Foundations and Algorithms. MIT Press, 2012.

[32] A. Beygelzimer, E. Hazan, S. Kale, and H. Luo. Online gradient boosting. NIPS, 2015.

[33] M. Csikós, N. H. Mustafa, and A. Kupavskii. Tight lower bounds on the VC-dimension of geometric set systems. J. Mach. Learn. Res., 2019.

[34] N. Agarwal, N. Brukhim, E. Hazan, and Z. Lu. Boosting for dynamical systems. arXiv, 2019.

[35] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.