Planning with POMDPs using a compact, logic-based representation
[1] Blai Bonet, Héctor Geffner. High-Level Planning and Control with Incomplete Information Using POMDPs, 2003.
[2] Héctor Geffner. Functional strips: A more flexible language for planning and problem solving, 2000.
[3] Anne Condon, et al. On the undecidability of probabilistic planning and related stochastic optimization problems, 2003, Artif. Intell.
[4] Yuguang Fang. A theorem on the k-adic representation of positive integers, 2001.
[5] Milos Hauskrecht, et al. Value-Function Approximations for Partially Observable Markov Decision Processes, 2000, J. Artif. Intell. Res.
[6] Leslie Pack Kaelbling, et al. Planning and Acting in Partially Observable Stochastic Domains, 1998, Artif. Intell.
[7] Jim Blythe, et al. Decision-Theoretic Planning, 1999, AI Mag.
[8] Jim Blythe, et al. Planning Under Uncertainty in Dynamic Domains, 1998.
[9] Jesse Hoey, et al. SPUDD: Stochastic Planning using Decision Diagrams, 1999, UAI.
[10] Craig Boutilier, et al. Value-Directed Belief State Approximation for POMDPs, 2000, UAI.
[11] Tze-Yun Leong, et al. Multiple Perspective Dynamic Decision Making, 1998, Artif. Intell.
[12] Richard Washington, et al. BI-POMDP: Bounded, Incremental, Partially-Observable Markov-Model Planning, 1997, ECP.
[13] Jesse Hoey, et al. APRICODD: Approximate Policy Construction Using Decision Diagrams, 2000, NIPS.
[14] Craig Boutilier, et al. Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations, 1996, AAAI/IAAI, Vol. 2.
[15] Xavier Boyen, et al. Tractable Inference for Complex Stochastic Processes, 1998, UAI.
[16] Nils J. Nilsson. Problem-solving methods in artificial intelligence, 1971, McGraw-Hill computer science series.
[17] Eric A. Hansen, et al. Solving POMDPs by Searching in Policy Space, 1998, UAI.
[18] Zhengzhu Feng, et al. Symbolic heuristic search for factored Markov decision processes, 2002, AAAI/IAAI.
[19] Daphne Koller, et al. Using Learning for Approximation in Stochastic Processes, 1998, ICML.
[20] Craig Boutilier, et al. Decision-Theoretic Planning: Structural Assumptions and Computational Leverage, 1999, J. Artif. Intell. Res.
[21] Craig Boutilier, et al. Symbolic Dynamic Programming for First-Order MDPs, 2001, IJCAI.
[22] Zhengzhu Feng, et al. Dynamic Programming for POMDPs Using a Factored State Representation, 2000, AIPS.
[23] David Poole, et al. The Independent Choice Logic for Modelling Multiple Agents Under Uncertainty, 1997, Artif. Intell.
[24] Daniel S. Weld, et al. Probabilistic Planning with Information Gathering and Contingent Execution, 1994, AIPS.
[25] Martha E. Pollack, et al. Contingency Selection in Plan Generation, 1997, ECP.
[26] Drew McDermott, et al. Planning and Acting, 1978, Cogn. Sci.
[27] Shlomo Zilberstein, et al. LAO*: A heuristic search algorithm that finds solutions with loops, 2001, Artif. Intell.
[28] Stuart J. Russell, et al. The BATmobile: Towards a Bayesian Automated Taxi, 1995, IJCAI.
[29] Piergiorgio Bertoli, et al. Planning in Nondeterministic Domains under Partial Observability via Symbolic Model Checking, 2001, IJCAI.