[1] A. Gelman, et al. Adaptively Scaling the Metropolis Algorithm Using Expected Squared Jumped Distance, 2007.
[2] M. Girolami, et al. Riemann manifold Langevin and Hamiltonian Monte Carlo methods, 2011, Journal of the Royal Statistical Society: Series B (Statistical Methodology).
[3] Yarin Gal, et al. Dropout Inference in Bayesian Neural Networks with Alpha-divergences, 2017, ICML.
[4] Misha Denil, et al. Learned Optimizers that Scale and Generalize, 2017, ICML.
[5] Finale Doshi-Velez, et al. Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks, 2016, ICLR.
[6] Tianqi Chen, et al. Stochastic Gradient Hamiltonian Monte Carlo, 2014, ICML.
[7] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, arXiv.
[8] Yoshua Bengio, et al. On the Optimization of a Synaptic Learning Rule, 2007.
[9] Ryan Babbush, et al. Bayesian Sampling Using Stochastic Gradient Thermostats, 2014, NIPS.
[10] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[11] Stefano Ermon, et al. A-NICE-MC: Adversarial Training for MCMC, 2017, NIPS.
[12] Yee Whye Teh, et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics, 2011, ICML.
[13] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[14] Jasper Snoek, et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[15] Jascha Sohl-Dickstein, et al. Generalizing Hamiltonian Monte Carlo with Neural Networks, 2017, ICLR.
[16] Alex Graves, et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[17] S. Ahn, et al. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring, 2012, ICML.
[18] Richard E. Turner, et al. Gradient Estimators for Implicit Models, 2017, ICLR.
[19] Tianqi Chen, et al. Relation of a New Interpretation of Stochastic Differential Equations to Ito Process, 2011, arXiv:1111.2987.
[20] Misha Denil, et al. Learning to Learn without Gradient Descent by Gradient Descent, 2016, ICML.
[21] Julien Cornebise, et al. Weight Uncertainty in Neural Networks, 2015, ICML.
[22] Zhe Gan, et al. Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification, 2016, CVPR.
[23] Matthew J. Beal. Variational algorithms for approximate Bayesian inference, 2003.
[24] Renjie Liao, et al. Understanding Short-Horizon Bias in Stochastic Meta-Optimization, 2018, ICLR.
[25] Lawrence Carin, et al. Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks, 2015, AAAI.
[26] Sean Gerrish, et al. Black Box Variational Inference, 2013, AISTATS.
[27] Yoshua Bengio, et al. NICE: Non-linear Independent Components Estimation, 2014, ICLR.
[28] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[29] Ryan P. Adams, et al. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks, 2015, ICML.
[30] Radford M. Neal. Slice Sampling, 2003, The Annals of Statistics.
[31] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[32] Michael I. Jordan, et al. An Introduction to Variational Methods for Graphical Models, 1999, Machine Learning.
[33] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[34] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, arXiv.
[35] Zhe Gan, et al. Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling, 2016, ACL.
[36] Radford M. Neal. MCMC Using Hamiltonian Dynamics, 2011, arXiv:1206.1901.
[37] Sebastian Thrun, et al. Learning to Learn, 1998, Springer US.
[38] Jitendra Malik, et al. Learning to Optimize, 2016, ICLR.
[39] Tianqi Chen, et al. A Complete Recipe for Stochastic Gradient MCMC, 2015, NIPS.
[40] Max Welling, et al. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks, 2017, ICML.
[41] Daniel Hernández-Lobato, et al. Black-Box Alpha Divergence Minimization, 2015, ICML.
[42] L. Yin, et al. Existence and construction of dynamical potential in nonequilibrium processes without detailed balance, 2006.
[43] Sergey Bartunov, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016, ICML.
[44] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[45] Carl E. Rasmussen, et al. PILCO: A Model-Based and Data-Efficient Approach to Policy Search, 2011, ICML.
[46] Yee Whye Teh, et al. Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex, 2013, NIPS.
[47] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[48] A. Kennedy, et al. Hybrid Monte Carlo, 1987, Physics Letters B.
[49] David Barber, et al. A Scalable Laplace Approximation for Neural Networks, 2018, ICLR.
[50] Aaron Klein, et al. Bayesian Optimization with Robust Bayesian Neural Networks, 2016, NIPS.
[51] Léon Bottou, et al. Wasserstein Generative Adversarial Networks, 2017, ICML.
[52] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[53] Zhe Gan, et al. Bridging the Gap between Stochastic Gradient MCMC and Stochastic Optimization, 2015, AISTATS.
[54] Max Welling, et al. Markov Chain Monte Carlo and Variational Inference: Bridging the Gap, 2014, ICML.
[55] Zeb Kurth-Nelson, et al. Learning to reinforcement learn, 2016, CogSci.
[56] Richard J. Mammone, et al. Meta-neural networks that learn by learning, 1992, IJCNN International Joint Conference on Neural Networks.
[57] Nando de Freitas, et al. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning, 2010, arXiv.