Roman Novak | Yasaman Bahri | Daniel A. Abolafia | Jeffrey Pennington | Jascha Sohl-Dickstein
[1] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[2] Anders Krogh, et al. A Simple Weight Decay Can Improve Generalization, 1991, NIPS.
[3] Ronald L. Rivest, et al. Training a 3-node neural network is NP-complete, 1988, COLT '88.
[4] James O. Berger, et al. Ockham's Razor and Bayesian Analysis, 1992.
[5] David J. C. MacKay, et al. Bayesian Interpolation, 1992, Neural Computation.
[6] David J. C. MacKay, et al. A Practical Bayesian Framework for Backpropagation Networks, 1992, Neural Computation.
[7] Carl E. Rasmussen, et al. Derivative Observations in Gaussian Process Models of Dynamic Systems, 2002, NIPS.
[8] Zoubin Ghahramani, et al. A note on the evidence and Bayesian Occam's razor, 2005.
[9] Karen Drukker, et al. A study of the effect of noise injection on the training of artificial neural networks, 2009, International Joint Conference on Neural Networks.
[10] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[11] A. Krizhevsky. Convolutional Deep Belief Networks on CIFAR-10, 2010.
[12] Pascal Vincent, et al. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction, 2011, ICML.
[13] Shie Mannor, et al. Robustness and generalization, 2010, Machine Learning.
[14] Razvan Pascanu, et al. On the number of response regions of deep feed forward networks with piece-wise linear activations, 2013, ArXiv.
[15] Surya Ganguli, et al. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, 2014, NIPS.
[16] Razvan Pascanu, et al. On the Number of Linear Regions of Deep Neural Networks, 2014, NIPS.
[17] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[18] Ryota Tomioka, et al. In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, 2014, ICLR.
[19] Oriol Vinyals, et al. Qualitatively characterizing neural network optimization problems, 2014, ICLR.
[20] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[21] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[22] Yann LeCun, et al. The Loss Surfaces of Multilayer Networks, 2014, AISTATS.
[23] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[24] Matus Telgarsky, et al. Representation Benefits of Deep Feedforward Networks, 2015, ArXiv.
[25] Misha Denil, et al. Noisy Activation Functions, 2016, ICML.
[26] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, 2016, NIPS.
[27] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, IEEE European Symposium on Security and Privacy (EuroS&P).
[28] Eugenio Culurciello, et al. An Analysis of Deep Neural Network Models for Practical Applications, 2016, ArXiv.
[29] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, ArXiv.
[30] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Gintare Karolina Dziugaite, et al. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, 2017, UAI.
[32] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[33] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[34] D. Sculley, et al. Google Vizier: A Service for Black-Box Optimization, 2017, KDD.
[35] Matus Telgarsky, et al. Spectrally-normalized margin bounds for neural networks, 2017, NIPS.
[36] Nathan Srebro, et al. Exploring Generalization in Deep Learning, 2017, NIPS.
[37] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[38] Guillermo Sapiro, et al. Robust Large Margin Deep Neural Networks, 2016, IEEE Transactions on Signal Processing.
[39] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[40] Jaehoon Lee, et al. Deep Neural Networks as Gaussian Processes, 2017, ICLR.
[41] David A. McAllester, et al. A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks, 2017, ICLR.
[42] Quoc V. Le, et al. Intriguing Properties of Adversarial Examples, 2017, ICLR.
[43] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[44] Shie Mannor, et al. Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms, 2016, ICLR.
[45] Yann Chevaleyre, et al. On the Expressive Power of Deep Fully Circulant Neural Networks, 2019, ArXiv.