A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act

This study discusses the interplay between metrics used to measure the explainability of AI systems and the proposed EU Artificial Intelligence Act. A standardisation process is ongoing: several entities (e.g. ISO) and scholars are discussing how to design systems that comply with the forthcoming Act, and explainability metrics play a significant role in this effort. This study identifies the requirements that such a metric should possess to ease compliance with the AI Act. It does so through an interdisciplinary approach: starting from the philosophical concept of explainability, it examines several metrics proposed by scholars and standardisation entities through the lens of the explainability obligations set by the proposed AI Act. Our analysis argues that metrics measuring the kind of explainability endorsed by the proposed AI Act should be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Accordingly, we discuss the extent to which these requirements are met by the metrics currently under discussion.
