XAI-KG: knowledge graph to support XAI and decision-making in manufacturing

The increasing adoption of artificial intelligence requires accurate forecasts and the means to understand the reasoning of artificial intelligence models behind such forecasts. Explainable Artificial Intelligence (XAI) aims to provide cues as to why a model issued a certain prediction. Such cues are of utmost importance to decision-making, since they provide insight into the features that most influenced a forecast and let the user decide whether the forecast can be trusted. Although many techniques have been developed to explain black-box models, little research has been done on assessing the quality of those explanations and their influence on decision-making. We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions. In this way, we provide the means to improve forecasting models, explanations, and recommendations of decision-making options. We tailor the knowledge graph to the domain of demand forecasting and validate it on real-world data.
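The abstract names four kinds of linked entities the knowledge graph collects feedback on: forecasts, forecast explanations, recommended decision-making options, and user actions. As a minimal sketch of that structure, the triples below model one forecast, its explanation, a recommended option, and a piece of user feedback; all entity and property names here are hypothetical illustrations, not the paper's actual ontology vocabulary.

```python
# A toy triple store: each entry is a (subject, predicate, object) tuple,
# mirroring the RDF-style structure of a knowledge graph. The "xai:" names
# are invented for illustration only.
kg = {
    ("forecast:1", "rdf:type", "xai:DemandForecast"),
    ("explanation:1", "rdf:type", "xai:ForecastExplanation"),
    ("explanation:1", "xai:explains", "forecast:1"),
    ("option:1", "rdf:type", "xai:DecisionOption"),
    ("option:1", "xai:basedOn", "forecast:1"),
    ("feedback:1", "rdf:type", "xai:UserFeedback"),
    ("feedback:1", "xai:rates", "explanation:1"),
}

def objects(subject: str, predicate: str) -> set[str]:
    """Return the objects of all triples matching subject and predicate."""
    return {o for s, p, o in kg if s == subject and p == predicate}

# Which forecast does explanation:1 explain?
print(objects("explanation:1", "xai:explains"))  # {'forecast:1'}
```

Linking feedback nodes to the explanation and forecast they rate is what lets the graph be queried later to improve the forecasting models, explanations, and recommended options.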
