暂无分享,去创建一个
[1] Yann LeCun,et al. The mnist database of handwritten digits , 2005 .
[2] B. Welford. Note on a Method for Calculating Corrected Sums of Squares and Products , 1962 .
[3] Yee Whye Teh,et al. Consistency and Fluctuations For Stochastic Gradient Langevin Dynamics , 2014, J. Mach. Learn. Res..
[4] Nikolaus Kriegeskorte,et al. Representation of uncertainty in deep neural networks through sampling , 2016, ArXiv.
[5] Yee Whye Teh,et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics , 2011, ICML.
[6] Wei Tang,et al. Ensembling neural networks: Many could be better than all , 2002, Artif. Intell..
[7] Alex Graves,et al. Practical Variational Inference for Neural Networks , 2011, NIPS.
[8] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[9] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ArXiv.
[10] Nikolaus Kriegeskorte,et al. Robustly representing uncertainty in deep neural networks through sampling , 2016 .
[11] Charles Blundell,et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles , 2016, NIPS.
[12] David M. Blei,et al. Stochastic Gradient Descent as Approximate Bayesian Inference , 2017, J. Mach. Learn. Res..
[13] Zoubin Ghahramani,et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning , 2015, ICML.
[14] Ryan P. Adams,et al. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks , 2015, ICML.
[15] Kilian Q. Weinberger,et al. Snapshot Ensembles: Train 1, get M for free , 2017, ICLR.