Christian S. Perone | Thomas S. Paula | Roberto Pereira Silveira
[1] Wray L. Buntine et al. Bayesian Back-Propagation, 1991, Complex Syst.
[2] Roger B. Grosse et al. Optimizing Neural Networks with Kronecker-factored Approximate Curvature, 2015, ICML.
[3] S. Levine et al. Robust Imitative Planning: Planning from Demonstrations Under Uncertainty, 2019.
[4] Zoubin Ghahramani et al. Deep Bayesian Active Learning with Image Data, 2017, ICML.
[5] Albin Cassirer et al. Randomized Prior Functions for Deep Reinforcement Learning, 2018, NeurIPS.
[6] Alex Graves et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[7] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[8] David Barber et al. Practical Gauss-Newton Optimisation for Deep Learning, 2017, ICML.
[9] Andrew Gordon Wilson et al. A Simple Baseline for Bayesian Uncertainty in Deep Learning, 2019, NeurIPS.
[10] Razvan Pascanu et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[11] Roger B. Grosse et al. A Kronecker-factored approximate Fisher matrix for convolution layers, 2016, ICML.
[12] Frank Hutter et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[13] Zoubin Ghahramani et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[14] Ross D. Shachter et al. Laplace's Method Approximations for Probabilistic Inference in Belief Networks with Continuous Variables, 1994, UAI.
[15] Adam D. Cobb et al. Scaling Hamiltonian Monte Carlo Inference for Bayesian Neural Networks with Symmetric Splitting, 2020, UAI.
[16] David Barber et al. A Scalable Laplace Approximation for Neural Networks, 2018, ICLR.
[17] Andrew Gordon Wilson et al. Averaging Weights Leads to Wider Optima and Better Generalization, 2018, UAI.
[18] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method, 2012, ArXiv.
[19] Yoram Singer et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res.
[20] Charles Blundell et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[21] Nicolas Le Roux et al. On the interplay between noise and curvature and its effect on optimization and generalization, 2019, AISTATS.
[22] David J. C. MacKay. A Practical Bayesian Framework for Backpropagation Networks, 1992, Neural Computation.
[23] Natalia Gimelshein et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[24] Frederik Kunstner et al. Limitations of the empirical Fisher approximation for natural gradient descent, 2019, NeurIPS.
[25] Agustinus Kristiadi et al. Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks, 2020, ICML.
[26] Shun-ichi Amari. Natural Gradient Works Efficiently in Learning, 1998, Neural Computation.
[27] Jaehoon Lee et al. On Empirical Comparisons of Optimizers for Deep Learning, 2019, ArXiv.