A Semantic Framework for Neural-Symbolic Computing

Two approaches to AI, neural networks and symbolic systems, have each proven very successful for a wide array of AI problems. However, neither has been able to achieve the general reasoning ability required for human-like intelligence. It has been argued that this is due to inherent weaknesses in each approach. Fortunately, these weaknesses appear to be complementary: symbolic systems are adept at the kinds of tasks neural networks struggle with, and vice versa. The field of neural-symbolic AI attempts to exploit this asymmetry by combining neural networks and symbolic AI into integrated systems, often by encoding symbolic knowledge into neural networks. Although many different methods for such encodings have been proposed, there is no common definition of an encoding against which to compare them. We seek to rectify this problem by introducing a semantic framework for neural-symbolic AI, which is shown to be general enough to account for a large family of neural-symbolic systems. We provide examples and proofs applying the framework to the neural encoding of various forms of knowledge representation and various neural network architectures. These at first sight disparate approaches are all shown to fall within the framework's formal definition of what we call semantic encoding for neural-symbolic AI.
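
To give a flavour of what encoding symbolic knowledge into a neural network can look like, the following sketch translates a propositional logic program into a threshold network in the style of the classical Core Method and KBANN line of work. This is a minimal illustration under those assumptions, not the paper's formal definition of semantic encoding; the function names are ours.

```python
# Minimal sketch (illustrative only): encode a propositional logic program
# as a threshold network, in the style of the Core Method / KBANN. Each rule
# becomes a neuron with unit weights on its body atoms and threshold
# |body| - 0.5, so one forward pass computes the immediate-consequence
# operator T_P of the program.

def step(x, threshold):
    """Heaviside activation: fires iff the weighted input reaches the threshold."""
    return 1 if x >= threshold else 0

def encode_program(rules):
    """Encode rules of the form (head, [positive body atoms]) and return a
    function computing one forward pass of the resulting network (T_P)."""
    def t_p(interpretation):
        new = dict(interpretation)
        for head, body in rules:
            activation = sum(interpretation[a] for a in body)
            if step(activation, len(body) - 0.5):
                new[head] = 1
        return new
    return t_p

# Example program: {c :- a, b.  d :- c.} with facts a and b given in the
# initial interpretation.
atoms = ["a", "b", "c", "d"]
rules = [("c", ["a", "b"]), ("d", ["c"])]
t_p = encode_program(rules)

state = {"a": 1, "b": 1, "c": 0, "d": 0}
for _ in range(len(atoms)):  # iterate the forward pass to a fixpoint
    state = t_p(state)
print(state)  # {'a': 1, 'b': 1, 'c': 1, 'd': 1}
```

Iterating the network's forward pass reaches the least model of the program, and it is exactly this kind of correspondence between network dynamics and symbolic semantics that a general definition of encoding must capture.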
