Practical POMDPs for Personal Assistant Domains
Pradeep Varakantham | Rajiv T. Maheswaran | Milind Tambe
[1] Riccardo Bellazzi,et al. Using uncertainty management techniques in medical therapy planning: A decision-theoretic approach , 1998, Applications of Uncertainty Formalisms.
[2] Michael L. Littman,et al. Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes , 1997, UAI.
[3] Joelle Pineau,et al. Point-based value iteration: An anytime algorithm for POMDPs , 2003, IJCAI.
[4] Milos Hauskrecht,et al. Planning treatment of ischemic heart disease with partially observable Markov decision processes , 2000, Artif. Intell. Medicine.
[5] Milind Tambe,et al. Towards Adjustable Autonomy for the Real World , 2002, J. Artif. Intell. Res..
[6] Cungen Cao,et al. Modelling Medical Decisions in DynaMoL: A New General Framework of Dynamic Decision Analysis , 1998, MedInfo.
[7] William S. Lovejoy,et al. Computationally Feasible Bounds for Partially Observed Markov Decision Processes , 1991, Oper. Res..
[8] Jean Oh,et al. Electric Elves: Immersing an Agent Organization in a Human Organization , 2000 .
[9] Eric A. Hansen,et al. An Improved Grid-Based Approximation Algorithm for POMDPs , 2001, IJCAI.
[10] Martha E. Pollack,et al. Autominder: an intelligent cognitive orthotic system for people with memory impairment , 2003, Robotics Auton. Syst..
[11] Shlomo Zilberstein,et al. Region-Based Incremental Pruning for POMDPs , 2004, UAI.
[12] Nicholas Roy,et al. Exponential Family PCA for Belief Compression in POMDPs , 2002, NIPS.
[13] Milos Hauskrecht,et al. Value-Function Approximations for Partially Observable Markov Decision Processes , 2000, J. Artif. Intell. Res..
[14] Weihong Zhang,et al. Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes , 2001, J. Artif. Intell. Res..