Size Independent Neural Transfer for RDDL Planning

Neural planners for RDDL MDPs produce deep reactive policies in an offline fashion. These policies scale well to large domains, but they are sample-inefficient and time-consuming to train from scratch for each new problem. To mitigate this, recent work has studied neural transfer learning, in which a generic planner trained on other problems from the same domain rapidly transfers to a new problem. However, this approach only transfers across problems of the same size. We present the first method for neural transfer of RDDL MDPs that transfers across problems of different sizes. Our architecture achieves size independence through two key innovations: (1) a state encoder that outputs a fixed-length state embedding by max pooling over a varying number of object embeddings, and (2) a single parameter-tied action decoder that projects object embeddings into action probabilities for the final policy. On the two challenging RDDL domains of SysAdmin and Game of Life, our approach transfers effectively across problem sizes and exhibits better learning curves than training from scratch.
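The sketch below illustrates the two size-independence ideas named above; it is not the authors' implementation. A shared per-object encoder is max-pooled into a fixed-length state embedding, and a single parameter-tied decoder scores each object's action. The layer sizes, the feature dimension obj_feat_dim, and the choice to concatenate object and state embeddings before decoding are all illustrative assumptions.

    # Minimal sketch (assumptions noted above), in PyTorch.
    import torch
    import torch.nn as nn

    class SizeIndependentPolicy(nn.Module):
        def __init__(self, obj_feat_dim: int, embed_dim: int = 64):
            super().__init__()
            # Shared per-object encoder: the same weights apply to every
            # object, so the network does not depend on object count.
            self.obj_encoder = nn.Sequential(
                nn.Linear(obj_feat_dim, embed_dim), nn.ReLU(),
                nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            )
            # Parameter-tied action decoder: one set of weights scores each
            # object's action from [object embedding ; state embedding].
            self.action_decoder = nn.Linear(2 * embed_dim, 1)

        def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
            # obj_feats: (batch, num_objects, obj_feat_dim); num_objects varies.
            obj_emb = obj_feats.new_tensor(0)  # placeholder removed below
            obj_emb = self.obj_encoder(obj_feats)              # (B, N, D)
            # (1) Fixed-length state embedding via max pooling over objects.
            state_emb = obj_emb.max(dim=1).values              # (B, D)
            # Broadcast the state embedding back to every object.
            state_rep = state_emb.unsqueeze(1).expand_as(obj_emb)
            # (2) One tied decoder maps each object to an action logit.
            logits = self.action_decoder(
                torch.cat([obj_emb, state_rep], dim=-1)).squeeze(-1)  # (B, N)
            return torch.softmax(logits, dim=-1)  # per-object action probs

    # Usage: the same network handles problems of different sizes.
    policy = SizeIndependentPolicy(obj_feat_dim=3)
    probs_small = policy(torch.randn(1, 5, 3))    # -> shape (1, 5)
    probs_large = policy(torch.randn(1, 12, 3))   # -> shape (1, 12)

Because no parameter shape depends on the number of objects, a policy trained on small instances can be applied directly to larger ones, which is the property the transfer method exploits.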
