Boosting simple learners
Noga Alon | Alon Gonen | Elad Hazan | Shay Moran
[1] Wenxin Jiang. Process consistency for AdaBoost, 2003.
[2] G. Lugosi, et al. On the Bayes-risk consistency of regularized boosting methods, 2003.
[3] S. Gey. Vapnik–Chervonenkis dimension of axis-parallel cuts, 2012, arXiv:1203.0193.
[4] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.
[5] J. von Neumann. Zur Theorie der Gesellschaftsspiele, 1928.
[6] David Haussler, et al. Learnability and the Vapnik-Chervonenkis dimension, 1989, JACM.
[7] Paul A. Viola, et al. Rapid object detection using a boosted cascade of simple features, 2001, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001).
[8] P. Bühlmann, et al. Boosting with the L2-loss: regression and classification, 2001.
[9] Gilles Blanchard, et al. On the Rate of Convergence of Regularized Boosting Classifiers, 2003, J. Mach. Learn. Res.
[10] Haipeng Luo, et al. Online Gradient Boosting, 2015, NIPS.
[11] J. Friedman. Stochastic gradient boosting, 2002.
[12] Naman Agarwal, et al. Boosting for Dynamical Systems, 2019, arXiv.
[13] David Haussler, et al. Sphere Packing Numbers for Subsets of the Boolean n-Cube with Bounded Vapnik-Chervonenkis Dimension, 1995, J. Comb. Theory, Ser. A.
[14] Peter L. Bartlett, et al. Boosting Algorithms as Gradient Descent, 1999, NIPS.
[15] Norbert Sauer. On the Density of Families of Sets, 1972, J. Comb. Theory, Ser. A.
[16] Peter L. Bartlett, et al. AdaBoost is Consistent, 2006, J. Mach. Learn. Res.
[17] P. Bühlmann, et al. Boosting with the L2 loss, 2003.
[18] Robert E. Schapire, et al. A theory of multiclass boosting, 2010, J. Mach. Learn. Res.
[19] Jiří Matoušek, et al. Tight upper bounds for the discrepancy of half-spaces, 1995, Discret. Comput. Geom.
[20] R. Schapire. The Strength of Weak Learnability, 1990, Machine Learning.
[21] Leslie G. Valiant. A theory of the learnable, 1984, STOC '84.
[22] Shie Mannor, et al. The Consistency of Greedy Algorithms for Classification, 2002, COLT.
[23] Yoav Freund. Boosting a weak learning algorithm by majority, 1990, COLT '90.
[24] L. Breiman. Some infinity theory for predictor ensembles, 2000.
[25] Ralph Alexander. Geometric methods in the study of irregularities of distribution, 1990, Comb.
[26] Shai Shalev-Shwartz, Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms, 2014, Cambridge University Press.
[27] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization, 2003.
[28] Jiří Matoušek, et al. Discrepancy and approximations for bounded VC-dimension, 1993, Comb.
[29] A. Giannopoulos. A note on the Banach-Mazur distance to the cube, 1995.
[30] Yoav Freund, et al. Boosting: Foundations and Algorithms, 2012.
[31] Shie Mannor, et al. Weak Learners and Improved Rates of Convergence in Boosting, 2000, NIPS.
[32] David Eisenstat, et al. The VC dimension of k-fold union, 2007, Inf. Process. Lett.
[33] Nabil H. Mustafa, et al. Tight Lower Bounds on the VC-dimension of Geometric Set Systems, 2019, J. Mach. Learn. Res.
[34] P. Assouad. Densité et dimension, 1983.
[35] L. Breiman. Arcing the edge, 1997.