Bernhard Pfahringer | Michael J. Cree | Eibe Frank | Henry Gouk
[1] Bruce H. Edwards, et al. Elementary linear algebra, 1988.
[2] Peter L. Bartlett, et al. The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network, 1998, IEEE Trans. Inf. Theory.
[3] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[4] C. Pugh. Real Mathematical Analysis, 2003.
[5] Eibe Frank, et al. Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms, 2004, PAKDD.
[6] Pierre Geurts, et al. Closed-form dual perturb and combine for tree-based models, 2005, ICML.
[7] Janez Demsar, et al. Statistical Comparisons of Classifiers over Multiple Data Sets, 2006, J. Mach. Learn. Res.
[8] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[9] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[10] Shie Mannor, et al. Robustness and generalization, 2010, Machine Learning.
[11] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[12] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[13] Shai Ben-David, et al. Understanding Machine Learning: From Theory to Algorithms, 2014.
[14] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[15] Max Welling, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[16] Sébastien Bubeck, et al. Convex Optimization: Algorithms and Complexity, 2014, Found. Trends Mach. Learn.
[17] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[18] Chris Eliasmith, et al. Hyperopt: a Python library for model selection and hyperparameter optimization, 2015.
[19] Davide Anguita, et al. Tikhonov, Ivanov and Morozov regularization for support vector machine learning, 2015, Machine Learning.
[20] Ruslan Salakhutdinov, et al. Path-SGD: Path-Normalized Optimization in Deep Neural Networks, 2015, NIPS.
[21] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[22] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[23] Yoram Singer, et al. Train faster, generalize better: Stability of stochastic gradient descent, 2015, ICML.
[24] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[25] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[27] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[28] Alex Kendall, et al. Concrete Dropout, 2017, NIPS.
[29] Léon Bottou, et al. Wasserstein GAN, 2017, ArXiv.
[30] Lacra Pavel, et al. On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning, 2017, ArXiv.
[31] Behnam Neyshabur, et al. Implicit Regularization in Deep Learning, 2017, ArXiv.
[32] Twan van Laarhoven, et al. L2 Regularization versus Batch and Weight Normalization, 2017, ArXiv.
[33] Maneesh Kumar Singh, et al. Lipschitz Properties for Deep Convolutional Networks, 2017, ArXiv.
[34] Matus Telgarsky, et al. Spectrally-normalized margin bounds for neural networks, 2017, NIPS.
[35] Yuichi Yoshida, et al. Spectral Norm Regularization for Improving the Generalizability of Deep Learning, 2017, ArXiv.
[36] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[37] Sashank J. Reddi, et al. On the Convergence of Adam and Beyond, 2018, ICLR.
[38] Masashi Sugiyama, et al. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, 2018, NeurIPS.
[39] Ohad Shamir, et al. Size-Independent Sample Complexity of Neural Networks, 2017, COLT.
[40] Bernhard Pfahringer, et al. MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes, 2018, ECML/PKDD.
[41] David A. McAllester, et al. A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks, 2017, ICLR.
[42] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[43] Philip M. Long, et al. The Singular Values of Convolutional Layers, 2018, ICLR.
[44] Maneesh Kumar Singh, et al. On Lipschitz Bounds of General Convolutional Neural Networks, 2018, IEEE Transactions on Information Theory.