Using Imagery to Simplify Perceptual Abstraction in Reinforcement Learning Agents