Exploiting Additive Structure in Factored MDPs for Reinforcement Learning
[1] Douglas H. Fisher et al. A Case Study of Incremental Concept Induction, 1986, AAAI.
[2] Milos Hauskrecht et al. Learning Basis Functions in Hybrid Domains, 2006, AAAI.
[3] Craig Boutilier et al. Stochastic dynamic programming with factored representations, 2000, Artif. Intell.
[4] Craig Boutilier et al. The Frame Problem and Bayesian Network Action Representation, 1996, Canadian Conference on AI.
[5] Keiji Kanazawa et al. A model for reasoning about persistence and causation, 1989.
[6] Shobha Venkataraman et al. Efficient Solution Algorithms for Factored MDPs, 2003, J. Artif. Intell. Res.
[7] Nevin Lianwen Zhang et al. On the Role of Context-Specific Independence in Probabilistic Inference, 1999, IJCAI.
[8] Jesse Hoey et al. SPUDD: Stochastic Planning using Decision Diagrams, 1999, UAI.
[9] Daphne Koller et al. Computing Factored Value Functions for Policies in Structured MDPs, 1999, IJCAI.
[10] Olivier Sigaud et al. Learning the structure of Factored Markov Decision Processes in reinforcement learning problems, 2006, ICML.
[11] P. Schweitzer et al. Generalized polynomial approximations in Markovian decision processes, 1985.
[12] Craig Boutilier et al. Exploiting Structure in Policy Construction, 1995, IJCAI.
[13] A. S. Manne. Linear Programming and Sequential Decisions, 1960.
[14] Olivier Sigaud et al. Chi-square Tests Driven Method for Learning the Structure of Factored MDPs, 2006, UAI.