暂无分享,去创建一个
[1] Sayan Mukherjee,et al. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization , 2006, Adv. Comput. Math..
[2] André Elisseeff,et al. Stability and Generalization , 2002, J. Mach. Learn. Res..
[3] Tengyuan Liang,et al. On the Risk of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels , 2019, ArXiv.
[4] Mikhail Belkin,et al. Reconciling modern machine-learning practice and the classical bias–variance trade-off , 2018, Proceedings of the National Academy of Sciences.
[5] Lorenzo Rosasco,et al. Theory III: Dynamics and Generalization in Deep Networks , 2019, ArXiv.
[6] Tengyuan Liang,et al. Just Interpolate: Kernel "Ridgeless" Regression Can Generalize , 2018, The Annals of Statistics.
[7] Alexander Rakhlin,et al. Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon , 2018, COLT.
[8] Mikhail Belkin,et al. To understand deep learning we need to understand kernel learning , 2018, ICML.
[9] Felipe Cucker,et al. On the mathematical foundations of learning , 2001 .
[10] Noureddine El Karoui,et al. The spectrum of kernel random matrices , 2010, 1001.0492.
[11] S. Barry Cooper,et al. Rounding-off Errors in Matrix Processes , 2013 .
[12] Tengyuan Liang,et al. On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels , 2019, COLT.
[13] Tomaso A. Poggio,et al. Fisher-Rao Metric, Geometry, and Complexity of Neural Networks , 2017, AISTATS.
[14] Zizhong Chen,et al. Condition Numbers of Gaussian Random Matrices , 2005, SIAM J. Matrix Anal. Appl..
[15] Felipe Cucker,et al. Condition - The Geometry of Numerical Algorithms , 2013, Grundlehren der mathematischen Wissenschaften.
[16] M. Rudelson,et al. The smallest singular value of a random rectangular matrix , 2008, 0802.3956.
[17] Andrew M. Saxe,et al. High-dimensional dynamics of generalization error in neural networks , 2017, Neural Networks.
[18] Andrea Montanari,et al. The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve , 2019, Communications on Pure and Applied Mathematics.
[19] V. Marčenko,et al. DISTRIBUTION OF EIGENVALUES FOR SOME SETS OF RANDOM MATRICES , 1967 .
[20] Nathan Srebro,et al. Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models , 2019, ICML.
[21] Kaifeng Lyu,et al. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks , 2019, ICLR.
[22] T. Poggio,et al. General conditions for predictivity in learning theory , 2004, Nature.
[23] Andrea Montanari,et al. Surprises in High-Dimensional Ridgeless Least Squares Interpolation , 2019, Annals of statistics.
[24] Olivier Bousquet,et al. Sharper bounds for uniformly stable algorithms , 2019, COLT.
[25] Lorenzo Rosasco,et al. Learning with Incremental Iterative Regularization , 2014, NIPS.
[26] Mikhail Belkin,et al. Two models of double descent for weak features , 2019, SIAM J. Math. Data Sci..