Decision-Theoretic Planning with non-Markovian Rewards

A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic planning, where many desirable behaviours are more naturally expressed as properties of execution sequences than as properties of states, NMRDPs form a more natural model than the commonly adopted fully Markovian decision process (MDP) model. While the more tractable solution methods developed for MDPs do not directly apply in the presence of non-Markovian rewards, a number of solution methods for NMRDPs have been proposed in the literature. These all exploit a compact specification of the non-Markovian reward function in temporal logic to automatically translate the NMRDP into an equivalent MDP, which is then solved using efficient MDP solution methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process Planner), a software platform for developing and experimenting with methods for decision-theoretic planning with non-Markovian rewards. The current version of NMRDPP implements, under a single interface, a family of methods based on both existing and new approaches, which we describe in detail. These include dynamic programming, heuristic search, and structured methods. Using NMRDPP, we compare the methods and identify certain problem features that affect their performance. NMRDPP's treatment of non-Markovian rewards is inspired by the treatment of domain-specific search control knowledge in the TLPlan planner, which it incorporates as a special case. In the First International Probabilistic Planning Competition, NMRDPP competed and performed well in both the domain-independent and hand-coded tracks, using search control knowledge in the latter.

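To make the translation concrete, the following is a minimal sketch of the expanded-state construction that the methods above share: a temporal reward formula is progressed through each transition, and every world state is paired with the progressed formula, so that reward becomes a Markovian function of the augmented state. This is not NMRDPP's actual code; the formula encoding and the names progress and translate are our own illustrative choices, and only a small future-LTL fragment is handled.

```python
# Illustrative sketch, not NMRDPP's code. Formulas are tuples over a small
# future-LTL fragment: ('atom', p), ('and', f, g), ('or', f, g),
# ('next', f), ('eventually', f).

TRUE, FALSE, DONE = ('true',), ('false',), ('done',)

def mk_and(l, r):
    if FALSE in (l, r): return FALSE
    if l == TRUE: return r
    if r == TRUE: return l
    return ('and', l, r)

def mk_or(l, r):
    if TRUE in (l, r): return TRUE
    if l == FALSE: return r
    if r == FALSE: return l
    return ('or', l, r)

def progress(f, atoms):
    """Progress formula f through a state, given as its frozenset of true
    atoms: the result is what must still hold of the rest of the run."""
    if f in (TRUE, FALSE):
        return f
    op = f[0]
    if op == 'atom':
        return TRUE if f[1] in atoms else FALSE
    if op == 'and':
        return mk_and(progress(f[1], atoms), progress(f[2], atoms))
    if op == 'or':
        return mk_or(progress(f[1], atoms), progress(f[2], atoms))
    if op == 'next':             # X g: g is checked at the next state
        return f[1]
    if op == 'eventually':       # F g: g holds now, or F g persists
        return mk_or(progress(f[1], atoms), f)
    raise ValueError(f'unknown operator {op!r}')

def translate(init, actions, step, phi, r):
    """Enumerate the reachable part of the equivalent MDP.

    step(s, a) -> list of (prob, s'); world states are frozensets of atoms.
    An e-state (s, psi) pairs a world state with the progressed reward
    formula, so reward is a function of the e-state alone: r exactly when
    psi is TRUE. TRUE then collapses to DONE, paying the reward once."""
    def successor(psi, s2):
        return DONE if psi in (TRUE, DONE) else progress(psi, s2)

    e0 = (init, progress(phi, init))
    e_states, e_trans, stack = {e0}, {}, [e0]
    while stack:
        s, psi = stack.pop()
        for a in actions:
            out = []
            for p, s2 in step(s, a):
                e2 = (s2, successor(psi, s2))
                out.append((p, e2))
                if e2 not in e_states:
                    e_states.add(e2)
                    stack.append(e2)
            e_trans[((s, psi), a)] = out
    return e_states, e_trans, (lambda e: r if e[1] == TRUE else 0.0)

if __name__ == '__main__':
    # Toy domain: a deterministic three-state cycle; reward 10 the first
    # time p holds with q holding at the following step.
    s0, s1, s2 = frozenset(), frozenset({'p'}), frozenset({'q'})
    cycle = {s0: s1, s1: s2, s2: s0}
    step = lambda s, a: [(1.0, cycle[s])]
    phi = ('eventually', ('and', ('atom', 'p'), ('next', ('atom', 'q'))))
    e_states, e_trans, reward = translate(s0, ['tick'], step, phi, 10.0)
    print(len(e_states), 'e-states,',
          sum(1 for e in e_states if reward(e) > 0), 'rewarding')
```

The same progression mechanism underlies TLPlan-style search control, which NMRDPP incorporates as a special case: a control formula that progresses to FALSE identifies a transition that can be pruned from the search.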