Grounding LTLf Specifications in Images

A critical challenge for neuro-symbolic (NeSy) approaches is handling the symbol grounding problem without direct supervision: mapping high-dimensional raw data to an interpretation over a finite set of abstract concepts with a known meaning, without using labels. In this work, we ground symbols in sequences of images by exploiting symbolic logical knowledge in the form of Linear Temporal Logic over finite traces (LTLf) formulas, together with sequence-level labels expressing whether a sequence of images complies with the given formula. Our approach translates the LTLf formula into an equivalent deterministic finite automaton (DFA) and interprets the latter in fuzzy logic. Experiments show that our system outperforms recurrent neural networks in sequence classification and can reach high image classification accuracy without being trained on any single-image label.
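To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of a fuzzy relaxation of a DFA. It assumes a hypothetical formula F(a) ("eventually a") over a two-symbol alphabet {a, b}, whose DFA has two states; an image classifier is assumed to output per-image symbol probabilities, and the DFA transition function is relaxed into an expected transition matrix, so the degree of acceptance of the whole sequence is differentiable and can be supervised with the sequence-level label alone.

```python
import numpy as np

# Hypothetical example: the LTLf formula F(a) ("eventually a") over the
# alphabet {a, b} compiles to a 2-state DFA:
#   state 0 (initial, rejecting) --a--> state 1 (accepting, absorbing)
#   state 0 --b--> state 0;  state 1 --a,b--> state 1
T_a = np.array([[0.0, 1.0],
                [0.0, 1.0]])      # transition matrix when reading symbol 'a'
T_b = np.array([[1.0, 0.0],
                [0.0, 1.0]])      # transition matrix when reading symbol 'b'
accepting = np.array([0.0, 1.0])  # indicator vector of accepting states

def fuzzy_acceptance(symbol_probs):
    """Soft degree of acceptance of a sequence under the relaxed DFA.

    symbol_probs: array of shape (seq_len, 2), where row t holds the
    classifier's probabilities (p_a, p_b) for the t-th image.
    Returns a value in [0, 1].
    """
    state = np.array([1.0, 0.0])       # start in the initial state
    for p_a, p_b in symbol_probs:
        T = p_a * T_a + p_b * T_b      # expected transition matrix
        state = state @ T              # propagate the state distribution
    return float(state @ accepting)

# A sequence whose images are classified, with some uncertainty, as (b, a, b):
probs = np.array([[0.1, 0.9],
                  [0.9, 0.1],
                  [0.1, 0.9]])
print(round(fuzzy_acceptance(probs), 3))  # → 0.919
```

Because `fuzzy_acceptance` is a product of differentiable matrix operations, its mismatch with the sequence label can be backpropagated through the image classifier, which is what lets the symbols be grounded without any single-image label.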
