Efficient Dialogue Policy Learning with BBQ-Networks
[1] Thomas J. Walsh,et al. Knows what it knows: a framework for self-aware learning , 2008, ICML '08.
[2] Milica Gasic,et al. Gaussian Processes for Fast Policy Optimisation of POMDP-based Dialogue Managers , 2010, SIGDIAL Conference.
[3] David Vandyke,et al. A Network-based End-to-End Trainable Task-oriented Dialogue System , 2016, EACL.
[4] Peter Auer,et al. Near-optimal Regret Bounds for Reinforcement Learning , 2008, J. Mach. Learn. Res..
[5] Sergey Levine,et al. Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models , 2015, ArXiv.
[6] Geoffrey E. Hinton,et al. Keeping the neural networks simple by minimizing the description length of the weights , 1993, COLT '93.
[7] Sharad Vikram,et al. Capturing Meaning in Product Reviews with Character-Level Generative Text Models , 2015, ArXiv.
[8] David Silver,et al. Deep Reinforcement Learning with Double Q-Learning , 2015, AAAI.
[9] Malcolm J. A. Strens,et al. A Bayesian Framework for Reinforcement Learning , 2000, ICML.
[10] Benjamin Van Roy,et al. Deep Exploration via Bootstrapped DQN , 2016, NIPS.
[11] Razvan Pascanu,et al. Overcoming catastrophic forgetting in neural networks , 2016, Proceedings of the National Academy of Sciences.
[12] Alex Graves,et al. Practical Variational Inference for Neural Networks , 2011, NIPS.
[13] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[14] Lihong Li,et al. A Bayesian Sampling Approach to Exploration in Reinforcement Learning , 2009, UAI.
[15] Jing He,et al. Policy Networks with Two-Stage Training for Dialogue Systems , 2016, SIGDIAL Conference.
[16] Dongho Kim,et al. Incremental on-line adaptation of POMDP-based dialogue managers to extended domains , 2014, INTERSPEECH.
[17] Sham M. Kakade,et al. On the sample complexity of reinforcement learning , 2003.
[18] Tom Schaul,et al. Unifying Count-Based Exploration and Intrinsic Motivation , 2016, NIPS.
[19] Benjamin Van Roy,et al. (More) Efficient Reinforcement Learning via Posterior Sampling , 2013, NIPS.
[20] Marilyn A. Walker,et al. Reinforcement Learning for Spoken Dialogue Systems , 1999, NIPS.
[21] Demis Hassabis,et al. Mastering the game of Go with deep neural networks and tree search , 2016, Nature.
[22] Richard S. Sutton,et al. Reinforcement Learning: An Introduction , 1998, IEEE Trans. Neural Networks.
[23] Steve J. Young,et al. Characterizing task-oriented dialog using a simulated ASR channel , 2004, INTERSPEECH.
[24] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[25] Lihong Li,et al. An Empirical Evaluation of Thompson Sampling , 2011, NIPS.
[26] Shie Mannor,et al. Reinforcement learning with Gaussian processes , 2005, ICML.
[27] Tom Schaul,et al. Prioritized Experience Replay , 2015, ICLR.
[28] Roberto Pieraccini,et al. Learning dialogue strategies within the Markov decision process framework , 1997, 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings.
[29] Julien Cornebise,et al. Weight Uncertainty in Neural Networks , 2015, ArXiv.
[30] Shane Legg,et al. Human-level control through deep reinforcement learning , 2015, Nature.
[31] Longxin Lin. Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching , 2004, Machine Learning.
[32] W. R. Thompson. ON THE LIKELIHOOD THAT ONE UNKNOWN PROBABILITY EXCEEDS ANOTHER IN VIEW OF THE EVIDENCE OF TWO SAMPLES , 1933 .
[33] Filip De Turck,et al. Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks , 2016, ArXiv.
[34] Steve Young,et al. Statistical User Simulation with a Hidden Agenda , 2007, SIGDIAL.