Neural-Symbolic Integration for Interactive Learning and Conceptual Grounding

We propose neural-symbolic integration for explaining abstract concepts and for interactive learning. Neural-symbolic integration and explanation allow users and domain experts to learn about the data-driven decision-making process of large neural models. The models are queried using a symbolic logic language, and interaction with the user then confirms or rejects a revision of the neural model via logic-based constraints that can be distilled into the model architecture. The approach is illustrated using the Logic Tensor Network framework together with Concept Activation Vectors and is applied to a Convolutional Neural Network.
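To make the idea of distilling a logic-based constraint into a neural model concrete, here is a minimal sketch in the spirit of Logic Tensor Networks, not the authors' implementation: a fuzzy-logic grounding of the hypothetical rule "forall x: Zebra(x) -> Striped(x)" is computed over a toy CNN's predicted truth values and added to the usual supervised loss. All names (SmallCNN, zebra_head, striped_head, the rule itself, the 0.5 weight) are illustrative assumptions.

```python
# Sketch (assumed, not the paper's code): a logic constraint distilled into
# training as a differentiable fuzzy-logic loss term, LTN-style.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN with two heads interpreted as predicates: Zebra(x) and Striped(x)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.zebra_head = nn.Linear(8, 1)    # truth value of Zebra(x)
        self.striped_head = nn.Linear(8, 1)  # truth value of Striped(x)

    def forward(self, x):
        h = self.features(x)
        return torch.sigmoid(self.zebra_head(h)), torch.sigmoid(self.striped_head(h))

def implies(a, b):
    # Reichenbach (product) fuzzy implication: a -> b  :=  1 - a + a*b
    return 1.0 - a + a * b

def rule_loss(p_zebra, p_striped):
    # "forall x" aggregated over the batch as a mean truth value;
    # the loss pushes the truth of the rule towards 1.
    truth = implies(p_zebra, p_striped).mean()
    return 1.0 - truth

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(16, 3, 32, 32)                 # placeholder batch
zebra_labels = torch.randint(0, 2, (16, 1)).float() # placeholder labels

opt.zero_grad()
p_zebra, p_striped = model(images)
supervised = nn.functional.binary_cross_entropy(p_zebra, zebra_labels)
loss = supervised + 0.5 * rule_loss(p_zebra, p_striped)  # weighted constraint term
loss.backward()
opt.step()
```

In an interactive-learning loop of the kind described above, a user-rejected explanation would correspond to adding (or reweighting) such a rule term before continuing training, so the constraint is absorbed into the model's weights rather than applied as a post-hoc filter.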
