On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI - Three Challenges for Future Research

Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. Alongside this notable progress, however, they provide no human-understandable insight into how a specific result was achieved. In contexts where AI significantly affects human life (e.g., recruitment tools, medical diagnoses), explainability is not only a desirable property: it is, or in some cases will soon be, a legal requirement. Most of the available approaches to eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions inside deep learning algorithms. A complementary approach is offered by symbolic AI, where symbols serve as elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches in the literature, highlighting their strengths and limitations, and we propose neural-symbolic integration as a cornerstone for designing AI that is closer to the comprehension of non-experts. Within this general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations.
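
To make the neural-symbolic idea concrete, the following is a minimal sketch (illustrative, not taken from the paper) of how a KG could supply a human-readable, post-hoc explanation for a classifier's prediction. It uses the rdflib Python library; the namespace, the toy triples, and the predicted label being explained are all hypothetical stand-ins for the "knowledge matching" step, in which a model's output label is matched to a KG entity.

```python
# A minimal sketch (illustrative, not the paper's method): using a Knowledge
# Graph to generate a human-readable explanation of a classifier's prediction.
# The namespace, triples, and predicted label below are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/kg/")  # hypothetical namespace

g = Graph()
# Toy domain knowledge about the class an (assumed) image classifier predicts.
g.add((EX.Zebra, RDFS.subClassOf, EX.Equine))
g.add((EX.Zebra, EX.hasFeature, EX.Stripes))
g.add((EX.Zebra, EX.livesIn, EX.Savanna))

def local_name(uri) -> str:
    """Strip a URI down to its human-readable local name."""
    return str(uri).split("/")[-1].split("#")[-1]

def explain(predicted_label: str) -> str:
    """Knowledge matching: link the model's output label to a KG entity,
    then verbalise the facts the KG holds about that entity."""
    entity = EX[predicted_label]
    facts = [
        f"{predicted_label} {local_name(p)} {local_name(o)}"
        for p, o in g.predicate_objects(entity)
    ]
    return f"Predicted '{predicted_label}'; the KG adds context: " + "; ".join(facts)

# The top-1 label of a hypothetical neural classifier:
print(explain("Zebra"))
```

Because the explanation is assembled from explicit triples rather than from the network's internal weights, a non-expert can inspect, and in principle correct, each fact it relies on; this is the property the abstract refers to as KGs being "natively developed to be explainable".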
