Piotr W. Mirowski | Razvan Pascanu | Fabio Viola | Hubert Soyer | Andy Ballard | Andrea Banino | Misha Denil | Ross Goroshin | Laurent Sifre | Koray Kavukcuoglu | Dharshan Kumaran | Raia Hadsell