Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies

We introduce a new RL problem in which the agent must generalize to a previously unseen environment characterized by a subtask graph that describes a set of subtasks and their dependencies. Unlike existing hierarchical multitask RL approaches that explicitly specify what the agent should do at a high level, our problem only describes properties of subtasks and the relationships among them, requiring the agent to perform complex reasoning to find the optimal subtask to execute. To solve this problem, we propose a neural subtask graph solver (NSGS) that encodes the subtask graph using a recursive neural network embedding. To overcome the difficulty of training, we propose a novel non-parametric gradient-based policy, graph reward propagation, to pre-train our NSGS agent, which is then fine-tuned with an actor-critic method. Experimental results on two 2D visual domains show that our agent can perform complex reasoning to find a near-optimal way of executing the subtask graph and generalizes well to unseen subtask graphs. In addition, we compare our agent with a Monte-Carlo tree search (MCTS) method, showing that our method is much more efficient than MCTS and that the performance of NSGS can be further improved by combining it with MCTS.
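
Since the abstract is high-level, the following is only an illustrative sketch of the core idea it describes: encoding a subtask graph bottom-up with a recursive neural network and scoring subtasks from the resulting node embeddings. All names, dimensions, and the message-passing scheme are assumptions for illustration, not the authors' NSGS implementation, whose details are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding dimension (assumed)

# Hypothetical subtask graph: subtask -> list of precondition subtasks (a DAG).
preconditions = {
    "get wood":    [],
    "get iron":    [],
    "make bridge": ["get wood", "get iron"],
    "cross river": ["make bridge"],
}

# Per-subtask input features (e.g. reward, completion flag); random placeholders here.
features = {s: rng.normal(size=D) for s in preconditions}

# Shared recursive-embedding parameters (randomly initialized for the sketch).
W_self  = rng.normal(scale=0.1, size=(D, D))
W_child = rng.normal(scale=0.1, size=(D, D))

def topo_order(pre):
    """Return subtasks so that every precondition precedes its dependents."""
    order, done = [], set()
    def visit(s):
        if s in done:
            return
        for p in pre[s]:
            visit(p)
        done.add(s)
        order.append(s)
    for s in pre:
        visit(s)
    return order

# Bottom-up pass: each node's embedding combines its own features with the
# sum of its preconditions' embeddings (a simple recursive graph embedding).
emb = {}
for s in topo_order(preconditions):
    child_sum = sum((emb[p] for p in preconditions[s]), np.zeros(D))
    emb[s] = np.tanh(W_self @ features[s] + W_child @ child_sum)

# A (hypothetical) policy head scores each subtask from its embedding and
# produces a distribution over which subtask to execute next.
w_out = rng.normal(scale=0.1, size=D)
scores = np.array([w_out @ emb[s] for s in preconditions])
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(dict(zip(preconditions, probs)))
```

In this sketch, embeddings are computed in topological order so that each subtask aggregates information from its preconditions, and a learned head turns node embeddings into a subtask-selection distribution; this mirrors, at a toy scale, the kind of graph-conditioned decision the NSGS agent is described as making.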
