Online Action Recognition

Recognition in planning seeks to identify an agent's intentions, goals, or activities given a set of observations and a knowledge library (e.g., goal states, plans, or domain theories). In this work we introduce the problem of Online Action Recognition. It consists of recognizing, in an open world, the planning action that best explains a partially observable state transition, drawing on a knowledge library of first-order STRIPS actions that is initially empty. We frame this as an optimization problem and propose two algorithms to address it: Action Unification (AU) and Online Action Recognition through Unification (OARU). The former builds on logic unification and generalizes two input actions using weighted partial MaxSAT. The latter searches the library for an action that explains an observed transition. If such an action exists, OARU generalizes it using AU, building an AU hierarchy in the process. Otherwise, OARU inserts into the library a Trivial Grounded Action (TGA) that explains just that transition. We report results on benchmarks from the International Planning Competition and PDDLGym, where OARU recognizes actions accurately with respect to expert knowledge and achieves real-time performance.
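The OARU loop described above can be sketched in a few lines. The following is a minimal propositional illustration, not the paper's implementation: the paper's AU generalizes first-order STRIPS actions via weighted partial MaxSAT, whereas here states are sets of ground atoms and `unify` is a naive intersection-based stand-in. All names (`Action`, `tga_from_transition`, `explains`, `unify`, `oaru_step`) are illustrative assumptions.

```python
# Minimal propositional sketch of the OARU update loop (illustrative only;
# the real AU operates on first-order actions via weighted partial MaxSAT).
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    pre: frozenset    # atoms required to hold before applying the action
    add: frozenset    # atoms added by the action
    dels: frozenset   # atoms deleted by the action

def tga_from_transition(s, s_next):
    """Trivial Grounded Action: explains exactly the transition (s, s_next)."""
    return Action(pre=frozenset(s),
                  add=frozenset(s_next - s),
                  dels=frozenset(s - s_next))

def explains(a, s, s_next):
    """Does applying a in state s produce s_next?"""
    return a.pre <= s and (s - a.dels) | a.add == s_next

def unify(a, b):
    """Naive stand-in for Action Unification: keep shared preconditions and
    require identical effects (real AU also generalizes effects)."""
    if a.add != b.add or a.dels != b.dels:
        return None
    return Action(pre=a.pre & b.pre, add=a.add, dels=a.dels)

def oaru_step(library, s, s_next):
    """One OARU update: generalize an explaining action, else insert a TGA."""
    tga = tga_from_transition(s, s_next)
    for a in list(library):
        g = unify(a, tga)
        if g is not None and explains(g, s, s_next):
            library.remove(a)   # replace the old action with its
            library.append(g)   # AU-generalized version
            return g
    library.append(tga)         # no action explains the transition
    return tga
```

Observing a second transition with the same effects but extra irrelevant context prunes the shared precondition, mirroring how OARU grows a generalization hierarchy instead of accumulating one TGA per observation.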
