Learning Macro-Actions for Arbitrary Planners and Domains

Many complex domains, and even larger problems in simple domains, remain challenging despite recent progress in planning. Besides developing and improving planning technologies, re-engineering a domain by utilising acquired knowledge opens a further avenue of research, and macro-actions, added to the domain as additional actions, provide a promising means of conveying such knowledge. A macro-action, or macro for short, is a group of actions selected for application as a single choice. Most existing work on macros exploits properties explicitly specific to the planners or the domains concerned; however, such properties are unlikely to hold for arbitrary planners or domains. A macro learning method that does not explicitly exploit any structural knowledge about planners or domains is therefore of immense interest. This paper presents an offline macro learning method that works with arbitrarily chosen planners and domains. Given a planner, a domain, and a number of example problems, the method generates macros from plans of some of the given problems under the guidance of a genetic algorithm. It represents macros like regular actions, evaluates them individually by solving the remaining given problems, and suggests individual macros to be added to the domain permanently. Genetic algorithms are automatic learning methods that can capture inherent features of a system using no explicit knowledge about it; our method therefore does not strive to discover or utilise any structural properties specific to a particular planner or domain.
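
As an illustration only, below is a minimal Python sketch of the kind of offline, GA-guided macro-learning loop the abstract describes. The planner interface (solve), the seeding of candidate macros from plans of the example problems, the fitness measure (total solving time on the remaining problems), and all parameter values are assumptions made for this sketch, not details taken from the paper; the planner is treated as a black box and a macro is appended to the action set like an ordinary extra action, mirroring the paper's representation choice.

```python
# Minimal sketch (assumed interfaces, not the authors' implementation) of an
# offline, GA-guided macro-learning loop: seed candidate macros from plans of
# some example problems, evaluate each macro by solving the remaining problems
# with the macro added as an extra action, and keep the fittest macros.

import random
from typing import Callable, List, Sequence, Tuple

Action = str
Macro = Tuple[Action, ...]          # a macro is an ordered group of actions
Plan = List[Action]
# Hypothetical planner interface: solve(actions, problem) -> (plan, solving_time)
Solver = Callable[[Sequence[Action], str], Tuple[Plan, float]]


def seed_macros(plans: List[Plan], max_len: int = 4) -> List[Macro]:
    """Extract candidate macros as contiguous action subsequences of seed plans."""
    candidates = set()
    for plan in plans:
        for i in range(len(plan)):
            for j in range(i + 2, min(i + max_len, len(plan)) + 1):
                candidates.add(tuple(plan[i:j]))
    return list(candidates)


def fitness(macro: Macro, base_actions: List[Action],
            eval_problems: List[str], solve: Solver) -> float:
    """Score a macro by total solving time when added as an extra action (assumed measure)."""
    augmented = base_actions + ["+".join(macro)]   # macro encoded like a regular action
    total = sum(solve(augmented, problem)[1] for problem in eval_problems)
    return -total                                   # shorter total time -> higher fitness


def learn_macros(base_actions: List[Action], seed_plans: List[Plan],
                 eval_problems: List[str], solve: Solver,
                 generations: int = 20, pop_size: int = 30) -> List[Macro]:
    """GA loop: select well-scoring macros, recombine and mutate them, keep the best."""
    candidates = seed_macros(seed_plans)            # assumes seed plans yield enough candidates
    population = random.sample(candidates, min(pop_size, len(candidates)))
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda m: fitness(m, base_actions, eval_problems, solve),
                        reverse=True)
        parents = scored[: max(2, pop_size // 2)]
        children: List[Macro] = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, min(len(a), len(b)) - 1)
            child = a[:cut] + b[cut:]               # single-point crossover on action sequences
            if random.random() < 0.2:               # mutation: swap in a fresh candidate macro
                child = random.choice(candidates)
            children.append(child)
        population = parents + children
    best = sorted(population,
                  key=lambda m: fitness(m, base_actions, eval_problems, solve),
                  reverse=True)
    return best[:3]                                 # suggest a few macros to add permanently
```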
