Learning Classical Planning Strategies with Policy Gradient

A common paradigm in classical planning is heuristic forward search. Forward-search planners typically rely on a single best-first search procedure that remains fixed throughout the search process. In this paper, we introduce a novel search framework capable of alternating between several forward-search approaches while solving a particular planning problem. The approach is selected by a trainable stochastic policy that maps the state of the search to a probability distribution over the approaches. This enables the use of policy gradient to learn search strategies tailored to a specific distribution of planning problems and a selected performance metric, e.g. the IPC score. We instantiate the framework by constructing a policy space consisting of five search approaches and a two-dimensional representation of the planner's state. We then train the system on randomly generated problems from five IPC domains using three different performance metrics. Our experimental results show that the learner is able to discover domain-specific search strategies, improving the planner's performance relative to the baselines of plain best-first search and a uniform policy.
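The core mechanism described above can be sketched with a minimal REINFORCE-style learner. This is an illustrative assumption, not the paper's implementation: the policy is taken to be a linear softmax over the five search approaches, conditioned on the two-dimensional planner-state representation, and the toy reward simply favors one fixed approach so the update rule's effect is visible.

```python
import numpy as np

rng = np.random.default_rng(0)

N_APPROACHES = 5   # size of the policy space (five search approaches)
STATE_DIM = 2      # two-dimensional planner-state representation
W = np.zeros((N_APPROACHES, STATE_DIM))  # policy parameters (assumed linear form)

def policy(state):
    """Softmax distribution over search approaches for the given search state."""
    logits = W @ state
    logits -= logits.max()               # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def reinforce_update(state, action, reward, lr=0.1):
    """REINFORCE step: W += lr * reward * grad log pi(action | state)."""
    global W
    p = policy(state)
    grad_log = -np.outer(p, state)       # -p_i * state for every approach i
    grad_log[action] += state            # +state for the chosen approach
    W += lr * reward * grad_log

# Toy demo: reward 1 whenever a fixed "good" approach is chosen, so the
# policy should concentrate probability mass on that approach over time.
state, good = np.array([1.0, 0.5]), 2
for _ in range(300):
    action = rng.choice(N_APPROACHES, p=policy(state))
    reinforce_update(state, action, 1.0 if action == good else 0.0)
```

In the full framework, the reward would instead come from the chosen performance metric (e.g. the IPC score of the resulting plan), and the state features would summarize the ongoing search rather than stay fixed.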
