Noga Alon | Shay Moran | Alon Gonen | Elad Hazan
[1] J. von Neumann. Zur Theorie der Gesellschaftsspiele, 1928.
[2] Norbert Sauer. On the Density of Families of Sets, 1972, J. Comb. Theory, Ser. A.
[3] P. Assouad. Densité et dimension, 1983.
[4] Leslie G. Valiant. A theory of the learnable, 1984, STOC '84.
[5] David Haussler, et al. Learnability and the Vapnik-Chervonenkis dimension, 1989, JACM.
[6] Ralph Alexander. Geometric methods in the study of irregularities of distribution, 1990, Comb.
[7] Yoav Freund. Boosting a weak learning algorithm by majority, 1990, COLT '90.
[8] Jiří Matoušek, et al. Discrepancy and approximations for bounded VC-dimension, 1993, Comb.
[9] A. Giannopoulos. A note on the Banach-Mazur distance to the cube, 1995.
[10] Jiří Matoušek. Tight upper bounds for the discrepancy of half-spaces, 1995, Discret. Comput. Geom.
[11] David Haussler. Sphere Packing Numbers for Subsets of the Boolean n-Cube with Bounded Vapnik-Chervonenkis Dimension, 1995, J. Comb. Theory, Ser. A.
[12] L. Breiman. Arcing the edge, 1997.
[13] Peter L. Bartlett, et al. Boosting Algorithms as Gradient Descent, 1999, NIPS.
[14] L. Breiman. Some infinity theory for predictor ensembles, 2000.
[15] Shie Mannor, et al. Weak Learners and Improved Rates of Convergence in Boosting, 2000, NIPS.
[16] P. Bühlmann, et al. Boosting with the L2-loss: regression and classification, 2001.
[17] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.
[18] Paul A. Viola, et al. Rapid object detection using a boosted cascade of simple features, 2001, CVPR 2001.
[19] J. Friedman. Stochastic gradient boosting, 2002.
[20] Shie Mannor, et al. The Consistency of Greedy Algorithms for Classification, 2002, COLT.
[21] P. Bühlmann, et al. Boosting with the L2 Loss, 2003.
[22] Wenxin Jiang. Process consistency for AdaBoost, 2003.
[23] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization, 2003.
[24] G. Lugosi, et al. On the Bayes-risk consistency of regularized boosting methods, 2003.
[25] Gilles Blanchard, et al. On the Rate of Convergence of Regularized Boosting Classifiers, 2003, J. Mach. Learn. Res.
[26] R. Schapire. The Strength of Weak Learnability, 1990, Machine Learning.
[27] Peter L. Bartlett, et al. AdaBoost is Consistent, 2006, J. Mach. Learn. Res.
[28] David Eisenstat, et al. The VC dimension of k-fold union, 2007, Inf. Process. Lett.
[29] Robert E. Schapire, et al. A theory of multiclass boosting, 2010, J. Mach. Learn. Res.
[30] S. Gey. Vapnik-Chervonenkis dimension of axis-parallel cuts, 2012, arXiv:1203.0193.
[31] Yoav Freund, et al. Boosting: Foundations and Algorithms, 2012.
[32] Haipeng Luo, et al. Online Gradient Boosting, 2015, NIPS.
[33] Nabil H. Mustafa, et al. Tight Lower Bounds on the VC-dimension of Geometric Set Systems, 2019, J. Mach. Learn. Res.
[34] Naman Agarwal, et al. Boosting for Dynamical Systems, 2019, arXiv.
[35] Shai Shalev-Shwartz, et al. Understanding Machine Learning: From Theory to Algorithms, 2014, Cambridge University Press.