Boosting simple learners

Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are “rules-of-thumb” from an “easy-to-learn class” (Schapire and Freund ’12; Shalev-Shwartz and Ben-David ’14). Formally, we assume the class of weak hypotheses has bounded VC dimension. We focus on two main questions:

(i) Oracle complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire (’95, ’12). Whereas the lower bound shows that Ω(1/γ²) weak hypotheses with γ-margin are sometimes necessary, our new method requires only Õ(1/γ) weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex (“deeper”) aggregation rules. We complement this result by showing that such complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound.

(ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are “far away” from the class be learned? Towards answering the first question we identify a combinatorial-geometric parameter that captures the expressivity of base classes in boosting. As a corollary we provide an affirmative answer to the second question for many well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
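To make the contrast concrete, here is a minimal sketch of the classical scheme the abstract refers to: AdaBoost over decision stumps (a base class of bounded VC dimension), aggregated by a weighted majority vote. This is an illustrative sketch under stated assumptions, not the paper's Õ(1/γ) algorithm; the function names and the exhaustive weak learner are choices made for the example.

```python
# Minimal AdaBoost sketch over decision stumps; illustrative, not the paper's algorithm.
import numpy as np

def stump(X, feature, threshold, polarity):
    # A decision stump: predict `polarity` where X[:, feature] > threshold, else -polarity.
    return np.where(X[:, feature] > threshold, polarity, -polarity)

def best_stump(X, y, w):
    # Weak learner: exhaustive search for the stump minimizing weighted 0-1 error.
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for p in (1, -1):
                err = np.sum(w * (stump(X, f, t, p) != y))
                if err < best_err:
                    best, best_err = (f, t, p), err
    return best, best_err

def adaboost(X, y, rounds):
    # Labels y must lie in {-1, +1}. Returns weighted stumps whose sign
    # (a weighted majority vote) is the final hypothesis.
    w = np.full(len(y), 1.0 / len(y))           # distribution over examples
    ensemble = []
    for _ in range(rounds):
        (f, t, p), err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)    # guard the logarithm below
        alpha = 0.5 * np.log((1 - err) / err)   # vote weight of this stump
        w = w * np.exp(-alpha * y * stump(X, f, t, p))  # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, f, t, p))
    return ensemble

def predict(X, ensemble):
    # Majority-vote aggregation: sign of the alpha-weighted sum of stump votes.
    return np.sign(sum(a * stump(X, f, t, p) for a, f, t, p in ensemble))

# Toy usage: a diagonal half-space, far from any single axis-aligned stump,
# is nevertheless fit by the weighted vote over stumps.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1])
ensemble = adaboost(X, y, rounds=100)
print("training accuracy:", np.mean(predict(X, ensemble) == y))
```

If every round's stump has a γ-edge (weighted error at most 1/2 − γ), the classical analysis bounds the training error after T rounds by exp(−2γ²T), so T = O(log(1/ε)/γ²) rounds suffice; the Freund–Schapire lower bound shows the 1/γ² dependence is unavoidable for majority-vote aggregation of this kind, which is precisely what the paper's deeper aggregation rules circumvent. The toy usage also echoes the expressivity question: the diagonal half-space target lies outside the stump class, yet the vote over stumps fits it.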

[1] Wenxin Jiang. Process consistency for AdaBoost. 2003.

[2] G. Lugosi, et al. On the Bayes-risk consistency of regularized boosting methods. 2003.

[3] S. Gey. Vapnik–Chervonenkis dimension of axis-parallel cuts. 2012, arXiv:1203.0193.

[4] J. Friedman. Greedy function approximation: A gradient boosting machine. 2001.

[5] J. Neumann. Zur Theorie der Gesellschaftsspiele. 1928.

[6] David Haussler, et al. Learnability and the Vapnik-Chervonenkis dimension. JACM, 1989.

[7] Paul A. Viola, et al. Rapid object detection using a boosted cascade of simple features. CVPR, 2001.

[8] P. Bühlmann, et al. Boosting with the L2-loss: regression and classification. 2001.

[9] Gilles Blanchard, et al. On the Rate of Convergence of Regularized Boosting Classifiers. J. Mach. Learn. Res., 2003.

[10] Haipeng Luo, et al. Online Gradient Boosting. NIPS, 2015.

[11] J. Friedman. Stochastic gradient boosting. 2002.

[12] Naman Agarwal, et al. Boosting for Dynamical Systems. arXiv, 2019.

[13] David Haussler, et al. Sphere Packing Numbers for Subsets of the Boolean n-Cube with Bounded Vapnik-Chervonenkis Dimension. J. Comb. Theory, Ser. A, 1995.

[14] Peter L. Bartlett, et al. Boosting Algorithms as Gradient Descent. NIPS, 1999.

[15] Norbert Sauer. On the Density of Families of Sets. J. Comb. Theory, Ser. A, 1972.

[16] Peter L. Bartlett, et al. AdaBoost is Consistent. J. Mach. Learn. Res., 2006.

[17] P. Bühlmann, et al. Boosting With the L2 Loss. 2003.

[18] Robert E. Schapire, et al. A theory of multiclass boosting. J. Mach. Learn. Res., 2010.

[19] Jiří Matoušek. Tight upper bounds for the discrepancy of half-spaces. Discret. Comput. Geom., 1995.

[20] R. Schapire. The Strength of Weak Learnability. Machine Learning, 1990.

[21] Leslie G. Valiant. A theory of the learnable. STOC '84, 1984.

[22] Shie Mannor, et al. The Consistency of Greedy Algorithms for Classification. COLT, 2002.

[23] Yoav Freund. Boosting a weak learning algorithm by majority. COLT '90, 1990.

[24] L. Breiman. Some infinity theory for predictor ensembles. 2000.

[25] Ralph Alexander. Geometric methods in the study of irregularities of distribution. Comb., 1990.

[26] Shai Shalev-Shwartz, Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. 2014.

[27] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. 2003.

[28] Jiří Matoušek, et al. Discrepancy and approximations for bounded VC-dimension. Comb., 1993.

[29] A. Giannopoulos. A note on the Banach-Mazur distance to the cube. 1995.

[30] Yoav Freund, et al. Boosting: Foundations and Algorithms. 2012.

[31] Shie Mannor, et al. Weak Learners and Improved Rates of Convergence in Boosting. NIPS, 2000.

[32] David Eisenstat, et al. The VC dimension of k-fold union. Inf. Process. Lett., 2007.

[33] Nabil H. Mustafa, et al. Tight Lower Bounds on the VC-dimension of Geometric Set Systems. J. Mach. Learn. Res., 2019.

[34] P. Assouad. Densité et dimension. 1983.

[35] L. Breiman. Arcing the edge. 1997.