Training Deep Reactive Policies for Probabilistic Planning Problems

State-of-the-art probabilistic planners typically apply lookahead search and reasoning at each step to make a decision. While this approach can yield high-quality decisions, it can be computationally expensive for problems that require fast decision making. In this paper, we investigate the potential for deep learning to replace search with fast reactive policies. We focus on supervised learning of deep reactive policies for probabilistic planning problems described in RDDL. A key challenge is exploring the large design space of network architectures and training methods, an exploration that was critical to prior deep learning successes. We investigate a number of choices in this space and conduct experiments across a set of benchmark problems. Our results show that effective deep reactive policies can be learned for many benchmark problems and that leveraging the planning problem description to define the network structure can be beneficial.
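As a rough illustration of the supervised (imitation) setting described above, the sketch below trains a small feed-forward reactive policy on state/action pairs produced by a slower lookahead planner, so that the learned network can act without search at execution time. All names and sizes here (STATE_DIM, NUM_ACTIONS, the demo_states/demo_actions tensors) are hypothetical placeholders introduced for illustration, not details from the paper; the paper's actual architectures and training procedures are described in its later sections.

```python
# Minimal sketch, assuming a fixed-size state encoding, a discrete action set,
# and planner-generated demonstrations already collected as tensors.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 32, 8  # hypothetical problem sizes

# A simple fully-connected reactive policy: state features -> action logits.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, NUM_ACTIONS),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# demo_states: (N, STATE_DIM) float tensor; demo_actions: (N,) long tensor of
# action indices chosen by a lookahead planner on sampled states (placeholders).
demo_states = torch.rand(1024, STATE_DIM)
demo_actions = torch.randint(0, NUM_ACTIONS, (1024,))

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(policy(demo_states), demo_actions)
    loss.backward()
    optimizer.step()

# At execution time the reactive policy replaces search entirely:
# action = policy(state_features).argmax(dim=-1)
```

The design choice this sketch reflects is the one motivated in the abstract: the expensive search happens only offline to generate training data, while online decision making reduces to a single forward pass through the network.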
