Yuri Burda | Harrison Edwards | Deepak Pathak | Amos J. Storkey | Trevor Darrell | Alexei A. Efros
[1] Jürgen Schmidhuber, et al. A possibility for implementing curiosity and boredom in model-building neural controllers, 1991.
[2] Jürgen Schmidhuber, et al. Curious model-building control systems, 1991, [Proceedings] 1991 IEEE International Joint Conference on Neural Networks.
[3] E. Deci, et al. Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions, 2000, Contemporary Educational Psychology.
[4] Nuttapong Chentanez, et al. Intrinsically Motivated Reinforcement Learning, 2004, NIPS.
[5] Robert Zubek, et al. MDA: A Formal Approach to Game Design and Game Research, 2004.
[6] Michael Gasser, et al. The Development of Embodied Cognition: Six Lessons from Babies, 2005, Artificial Life.
[7] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[8] Jesse Hoey, et al. An analytic solution to discrete Bayesian reinforcement learning, 2006, ICML.
[9] Pierre-Yves Oudeyer, et al. What is Intrinsic Motivation? A Typology of Computational Approaches, 2007, Frontiers in Neurorobotics.
[10] Kenneth O. Stanley, et al. Exploiting Open-Endedness to Solve Problems Through the Search for Novelty, 2008, ALIFE.
[11] Yann LeCun, et al. What is the best multi-stage architecture for object recognition?, 2009, 2009 IEEE 12th International Conference on Computer Vision.
[12] Jürgen Schmidhuber, et al. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010), 2010, IEEE Transactions on Autonomous Mental Development.
[13] Jürgen Schmidhuber, et al. Formal Theory of Fun and Creativity, 2010, ECML/PKDD.
[14] Kenneth O. Stanley, et al. Abandoning Objectives: Evolution Through the Search for Novelty Alone, 2011, Evolutionary Computation.
[15] Zhenghao Chen, et al. On Random Weights and Unsupervised Feature Learning, 2011, ICML.
[16] Herre van Oostendorp, et al. The role of Game Discourse Analysis and curiosity in creating engaging and effective serious games by implementing a back story and foreshadowing, 2011, Interacting with Computers.
[17] Doina Precup, et al. An information-theoretic approach to curiosity-driven reinforcement learning, 2012, Theory in Biosciences.
[18] Pierre-Yves Oudeyer, et al. Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress, 2012, NIPS.
[19] G. Costikyan, et al. Uncertainty in Games, 2013.
[20] Daan Wierstra, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[21] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[22] Sergey Levine, et al. Incentivizing Exploration in Reinforcement Learning with Deep Predictive Models, 2015, ArXiv.
[23] Le Song, et al. Deep Fried Convnets, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[24] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[25] Shakir Mohamed, et al. Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning, 2015, NIPS.
[26] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[27] Marc G. Bellemare, et al. The Arcade Learning Environment: An Evaluation Platform for General Agents (Extended Abstract), 2012, IJCAI.
[28] Filip De Turck, et al. VIME: Variational Information Maximizing Exploration, 2016, NIPS.
[29] Benjamin Van Roy, et al. Deep Exploration via Bootstrapped DQN, 2016, NIPS.
[30] Tom Schaul, et al. Unifying Count-Based Exploration and Intrinsic Motivation, 2016, NIPS.
[31] H. P. de Vladar. Why Greatness Cannot Be Planned: The Myth of the Objective, 2016, Leonardo.
[32] Filip De Turck, et al. #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning, 2016, NIPS.
[33] Marc G. Bellemare, et al. Count-Based Exploration with Neural Density Models, 2017, ICML.
[34] Alexei A. Efros, et al. Curiosity-Driven Exploration by Self-Supervised Prediction, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[35] Justin Fu, et al. EX2: Exploration with Exemplar Models for Deep Reinforcement Learning, 2017, NIPS.
[36] Daan Wierstra, et al. Variational Intrinsic Control, 2016, ICLR.
[37] Richard Y. Chen, et al. UCB and InfoGain Exploration via Q-Ensembles, 2017, ArXiv.
[38] S. Shankar Sastry, et al. Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning, 2017, ArXiv.
[39] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, ArXiv.
[40] Pierre-Yves Oudeyer, et al. Computational Theories of Curiosity-Driven Learning, 2018, ArXiv.
[41] Marcin Andrychowicz, et al. Parameter Space Noise for Exploration, 2017, ICLR.
[42] Shane Legg, et al. Noisy Networks for Exploration, 2017, ICLR.
[43] Ilya Kostrikov, et al. Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play, 2017, ICLR.
[44] Jitendra Malik, et al. Zero-Shot Visual Imitation, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[45] Sergey Levine, et al. Diversity is All You Need: Learning Skills without a Reward Function, 2018, ICLR.