[1] Kenneth Y. Goldberg,et al. Eigentaste: A Constant Time Collaborative Filtering Algorithm , 2001, Information Retrieval.
[2] B. Efron. The jackknife, the bootstrap, and other resampling plans , 1987 .
[3] David M. Blei,et al. A Variational Analysis of Stochastic Gradient Algorithms , 2016, ICML.
[4] Edwin V. Bonilla,et al. Multi-task Gaussian Process Prediction , 2007, NIPS.
[5] R. Singer,et al. The Audubon Society field guide to North American mushrooms , 1981 .
[6] Zoubin Ghahramani,et al. Variational Gaussian Dropout is not Bayesian , 2017, ArXiv.
[7] Daniel Hernández-Lobato,et al. Black-Box Alpha Divergence Minimization , 2015, ICML.
[8] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[9] J. Cunningham,et al. Expectation Propagation as a Way of Life , 2020 .
[10] Julien Cornebise,et al. Weight Uncertainty in Neural Network , 2015, ICML.
[11] Benjamin Van Roy,et al. Deep Exploration via Bootstrapped DQN , 2016, NIPS.
[12] Ole Winther,et al. Gaussian Processes for Classification: Mean-Field Algorithms , 2000, Neural Computation.
[13] Shane Legg,et al. Human-level control through deep reinforcement learning , 2015, Nature.
[14] Geoffrey E. Hinton,et al. Bayesian Learning for Neural Networks , 1995 .
[15] Benjamin Van Roy,et al. Learning to Optimize via Information-Directed Sampling , 2014, NIPS.
[16] Michael I. Jordan,et al. An Introduction to Variational Methods for Graphical Models , 1999, Machine Learning.
[17] Judea Pearl,et al. Fusion, Propagation, and Structuring in Belief Networks , 1986, Artif. Intell..
[18] Zoubin Ghahramani,et al. Sparse Gaussian Processes using Pseudo-inputs , 2005, NIPS.
[19] Shane Legg,et al. Noisy Networks for Exploration , 2017, ICLR.
[20] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ArXiv.
[21] Lihong Li,et al. An Empirical Evaluation of Thompson Sampling , 2011, NIPS.
[22] Marcin Andrychowicz,et al. Parameter Space Noise for Exploration , 2017, ICLR.
[23] Aki Vehtari,et al. Expectation Propagation as a Way of Life: A Framework for Bayesian Inference on Partitioned Data , 2014, J. Mach. Learn. Res..
[24] B. Efron,et al. The Jackknife: The Bootstrap and Other Resampling Plans. , 1983 .
[25] Rémi Munos,et al. Pure Exploration in Multi-armed Bandits Problems , 2009, ALT.
[26] David Tse,et al. Time-Sensitive Bandit Learning and Satisficing Thompson Sampling , 2017, ArXiv.
[27] Alessandro Lazaric,et al. Active Learning for Accurate Estimation of Linear Models , 2017, ICML.
[28] Ron Kohavi,et al. Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid , 1996, KDD.
[29] Geoffrey E. Hinton,et al. Keeping the neural networks simple by minimizing the description length of the weights , 1993, COLT '93.
[30] James Hensman,et al. Scalable Variational Gaussian Process Classification , 2014, AISTATS.
[31] Thierry Bertin-Mahieux,et al. The Million Song Dataset , 2011, ISMIR.
[32] Tom Minka,et al. Expectation Propagation for approximate Bayesian inference , 2001, UAI.
[33] Alex Graves,et al. Practical Variational Inference for Neural Networks , 2011, NIPS.
[34] W. R. Thompson. On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples , 1933, Biometrika.
[35] Yee Whye Teh,et al. Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex , 2013, NIPS.
[36] Michael I. Jordan,et al. Graphical Models, Exponential Families, and Variational Inference , 2008, Found. Trends Mach. Learn..
[37] Tom Schaul,et al. Prioritized Experience Replay , 2015, ICLR.
[38] Diederik P. Kingma,et al. Variational Dropout and the Local Reparameterization Trick , 2015, NIPS.
[39] Ole-Christoffer Granmo,et al. Solving two-armed Bernoulli bandit problems using a Bayesian learning automaton , 2010, Int. J. Intell. Comput. Cybern..
[40] Michalis K. Titsias,et al. Variational Learning of Inducing Variables in Sparse Gaussian Processes , 2009, AISTATS.
[41] Christopher K. I. Williams,et al. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning) , 2005 .
[42] Shipra Agrawal,et al. Analysis of Thompson Sampling for the Multi-armed Bandit Problem , 2011, COLT.
[43] Alexis Boukouvalas,et al. GPflow: A Gaussian Process Library using TensorFlow , 2016, J. Mach. Learn. Res..
[44] Zoubin Ghahramani,et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning , 2015, ICML.
[45] Christopher M. Bishop. Pattern Recognition and Machine Learning , 2006 .
[46] Lawrence Carin,et al. Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks , 2015, AAAI.
[47] Prabhat,et al. Scalable Bayesian Optimization Using Deep Neural Networks , 2015, ICML.
[48] Tom Minka,et al. A family of algorithms for approximate Bayesian inference , 2001 .
[49] Yee Whye Teh,et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics , 2011, ICML.
[50] Sungjin Ahn,et al. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring , 2012, ICML.