Towards Faster Planning with Continuous Resources in Stochastic Domains
[1] Zhengzhu Feng, et al. Dynamic Programming for Structured Continuous Markov Decision Problems, 2004, UAI.
[2] Edmund H. Durfee, et al. Stationary Deterministic Policies for Constrained MDPs with Multiple Rewards, Costs, and Discount Factors, 2005, IJCAI.
[3] Lihong Li, et al. Lazy Approximation for Solving Continuous Finite-Horizon MDPs, 2005, AAAI.
[4] David E. Smith, et al. Planning Under Continuous Time and Resource Uncertainty: A Challenge for AI, 2002, AIPS Workshop on Planning for Temporal Domains.
[5] Nicolas Meuleau, et al. Scaling Up Decision Theoretic Planning to Planetary Rover Problems, 2004.
[6] E. Altman. Constrained Markov Decision Processes, 1999.
[7] Milind Tambe, et al. A Fast Analytical Algorithm for Solving Markov Decision Processes with Real-Valued Resources, 2007, IJCAI.
[8] Ronen I. Brafman, et al. Planning with Continuous Resources in Stochastic Domains, 2005, IJCAI.
[9] Daniel N. Nikovski, et al. Non-Linear Stochastic Control in Continuous State Spaces by Exact Integration in Bellman's Equations, 2003.
[10] Makoto Yokoo, et al. Winning back the CUP for distributed POMDPs: planning over continuous belief spaces, 2006, AAMAS '06.
[11] Craig Boutilier, et al. Stochastic dynamic programming with factored representations, 2000, Artif. Intell.
[12] Milos Hauskrecht, et al. Linear Program Approximations for Factored Continuous-State Markov Decision Processes, 2003, NIPS.
[13] Michail G. Lagoudakis, et al. Least-Squares Policy Iteration, 2003, J. Mach. Learn. Res.
[14] Andrew Y. Ng, et al. Policy Search via Density Estimation, 1999, NIPS.
[15] Milos Hauskrecht, et al. Solving Factored MDPs with Continuous and Discrete Variables, 2004, UAI.