[1] David J. C. MacKay. A Practical Bayesian Framework for Backpropagation Networks, 1992, Neural Computation.
[2] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[3] Robert B. Fisher, et al. Fine-Grained Recognition in the Noisy Wild: Sensitivity Analysis of Convolutional Neural Networks Approaches, 2016, BMVC.
[4] Brendan J. Frey, et al. Variational Learning in Nonlinear Gaussian Belief Networks, 1999, Neural Computation.
[5] Geoffrey E. Hinton, et al. The Helmholtz Machine, 1995, Neural Computation.
[6] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[7] Jason Yosinski, et al. Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2015, CVPR.
[8] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[9] Ramón Fernández Astudillo, et al. Propagation of Uncertainty Through Multilayer Perceptrons for Robust Automatic Speech Recognition, 2011, INTERSPEECH.
[10] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[11] N. L. Johnson, et al. Multivariate Logistic Distributions, 2005.
[12] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, Journal of Machine Learning Research.
[13] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of Classifiers: From Adversarial to Random Noise, 2016, NIPS.
[14] Max Welling, et al. Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets, 2014, ICML.
[15] Gareth O. Roberts, et al. A General Framework for the Parametrization of Hierarchical Models, 2007, arXiv:0708.3797.
[16] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[17] Sepp Hochreiter, et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015, ICLR.
[18] Samuel Kotz, et al. Exact Distribution of the Max/Min of Two Gaussian Random Variables, 2008, IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
[19] Tom Minka. Expectation Propagation for Approximate Bayesian Inference, 2001, UAI.
[20] Radford M. Neal. Connectionist Learning of Belief Networks, 1992, Artificial Intelligence.
[21] Daan Wierstra, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[22] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[23] Venu Govindaraju, et al. Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks, 2016, ICML.
[24] Boris Flach, et al. Generative Learning for Deep Networks, 2017, arXiv.
[25] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, 1991, Morgan Kaufmann Series in Representation and Reasoning.
[26] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2017, CVPR.
[27] Sepp Hochreiter, et al. Self-Normalizing Neural Networks, 2017, NIPS.
[28] Boris Flach, et al. Normalization of Neural Networks Using Analytic Variance Propagation, 2018, arXiv.
[29] Ryan P. Adams, et al. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks, 2015, ICML.
[30] David J. C. MacKay. The Evidence Framework Applied to Classification Networks, 1992, Neural Computation.
[31] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[32] Diederik P. Kingma. Fast Gradient-Based Inference with Continuous Latent Variable Models in Auxiliary Form, 2013, arXiv.
[33] Surya Ganguli, et al. Deep Information Propagation, 2016, ICLR.
[34] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[35] Christopher D. Manning, et al. Fast Dropout Training, 2013, ICML.