Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling
Kyungjae Lee | Songhwai Oh | Sungjoon Choi | Sungbin Lim
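The title's "sampling-free variance modeling" refers to the fact that a mixture density network's predictive variance can be computed in closed form from its outputs, rather than by Monte Carlo sampling (as in MC dropout). As an illustrative sketch only, not the paper's implementation: by the law of total variance, a Gaussian mixture's variance splits into an expected within-component term and a between-component (spread-of-means) term. The function name `mixture_statistics` below is hypothetical.

```python
import numpy as np

def mixture_statistics(pi, mu, sigma):
    """Closed-form mean and variance of a 1-D Gaussian mixture.

    pi:    (K,) mixing coefficients, summing to 1
    mu:    (K,) component means
    sigma: (K,) component standard deviations
    """
    mean = np.sum(pi * mu)
    # Law of total variance:
    within = np.sum(pi * sigma ** 2)        # expected variance inside components
    between = np.sum(pi * (mu - mean) ** 2)  # variance of the component means
    return mean, within + between

# Two well-separated, low-noise components: the between-component
# term dominates the total predictive variance.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([0.1, 0.1])
mean, var = mixture_statistics(pi, mu, sigma)
# mean = 0.0, var = 0.01 + 1.0 = 1.01
```

Because both terms are simple weighted sums over the K mixture components, a single forward pass of the network suffices to obtain an uncertainty estimate.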
[1] John Schulman, et al. Concrete Problems in AI Safety, 2016, ArXiv.
[2] Stephane Ross, et al. Interactive Learning for Sequential Decisions and Predictions, 2013.
[3] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[4] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[5] Axel Brando Guillaumes, et al. Mixture density networks for distribution and uncertainty estimation, 2017.
[6] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Charles Richter, et al. Safe Visual Navigation via Deep Learning and Novelty Detection, 2017, Robotics: Science and Systems.
[8] Sergey Levine, et al. Uncertainty-Aware Reinforcement Learning for Collision Avoidance, 2017, ArXiv.
[9] Yarin Gal. Uncertainty in Deep Learning, 2016.
[10] Jason Weston, et al. A unified architecture for natural language processing: deep neural networks with multitask learning, 2008, ICML '08.
[11] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[12] Sergey Levine, et al. Trust Region Policy Optimization, 2015, ICML.
[13] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, ArXiv.
[14] Kyungjae Lee, et al. Density Matching Reward Learning, 2016, ArXiv.
[15] Geoffrey J. McLachlan, et al. Mixture models: inference and applications to clustering, 1989.
[16] C. Bishop. Mixture density networks, 1994.
[17] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res..
[18] Geoffrey E. Hinton, et al. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, 2017, ICLR.