[1] Jascha Sohl-Dickstein, et al. Sensitivity and Generalization in Neural Networks: an Empirical Study, 2018, ICLR.
[2] John Shawe-Taylor, et al. PAC-Bayes & Margins, 2002, NIPS.
[3] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[4] Lise Getoor, et al. Stability and Generalization in Structured Prediction, 2016, J. Mach. Learn. Res.
[5] Nathan Srebro, et al. The Implicit Bias of Gradient Descent on Separable Data, 2017, J. Mach. Learn. Res.
[6] Matus Telgarsky, et al. Spectrally-normalized margin bounds for neural networks, 2017, NIPS.
[7] J. Zico Kolter, et al. Generalization in Deep Networks: The Role of Distance from Initialization, 2019, arXiv.
[8] Yi Zhang, et al. Stronger generalization bounds for deep nets via a compression approach, 2018, ICML.
[9] Yuanzhi Li, et al. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data, 2018, NeurIPS.
[10] Ohad Shamir, et al. Size-Independent Sample Complexity of Neural Networks, 2017, COLT.
[11] Ryota Tomioka, et al. In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, 2014, ICLR.
[12] John Langford, et al. (Not) Bounding the True Error, 2001, NIPS.
[13] Gintare Karolina Dziugaite, et al. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, 2017, UAI.
[14] Shai Shalev-Shwartz, et al. SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data, 2017, ICLR.
[15] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[16] David A. McAllester. Some PAC-Bayesian Theorems, 1998, COLT '98.
[17] Thore Graepel, et al. A PAC-Bayesian Margin Bound for Linear Classifiers: Why SVMs work, 2000, NIPS.
[18] Geoffrey E. Hinton, et al. Keeping the neural networks simple by minimizing the description length of the weights, 1993, COLT '93.
[19] Nathan Srebro, et al. Exploring Generalization in Deep Learning, 2017, NIPS.
[20] Jürgen Schmidhuber, et al. Flat Minima, 1997, Neural Computation.
[21] David A. McAllester. PAC-Bayesian model averaging, 1999, COLT '99.
[22] David A. McAllester, et al. A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks, 2017, ICLR.
[23] Joel A. Tropp, et al. User-Friendly Tail Bounds for Sums of Random Matrices, 2010, Found. Comput. Math.
[24] David A. McAllester. Simplified PAC-Bayesian Margin Bounds, 2003, COLT.