An Approach to Explainable AI for Digital Pathology

Many medical diagnostics are based, at least in part, on medical imaging. The development of machine learning, and in particular Deep Learning (DL) based image processing, over the last decade has driven the growth of diagnostic support aids built on these technologies. A barrier to the adoption of these systems is the lack of understandability of their diagnostic suggestions, due to their black-box nature. Several approaches have been proposed to increase their explainability, including evaluation of the contributions of internal layers to the outputs, network modifications that make these contributions more meaningful, and model-agnostic explanations. Medical systems are considered the paradigmatic case where understandability is of utmost importance. Digital Pathology (DP) is an especially difficult, but especially interesting, case for image-based diagnostic support aids. This is due, among other factors, to the fact that DP images are very large and multidimensional, with the relevant information not readily apparent at first sight. It is important to develop tools that let pathologists apply their existing knowledge easily while improving diagnostic quality and productivity. The design and evaluation of an interpretable digital pathology diagnosis aid would open the possibility of developing and deploying larger-scale systems that provide pathologists with reliable and trustworthy tools to help them in their daily diagnostic tasks.

Keywords—Digital Pathology; Explainable Artificial Intelligence; Deep Learning
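One of the model-agnostic explanation techniques the abstract alludes to can be sketched as occlusion sensitivity: systematically blank out image regions and measure how much the classifier's score drops, without any access to the model's internals. The sketch below is a minimal illustration, not the method of any particular cited system; `toy_predict` is a stand-in for a real black-box classifier and the patch size and baseline value are assumptions chosen for clarity.

```python
import numpy as np

def occlusion_map(predict, image, patch=4, baseline=0.0):
    """Model-agnostic sensitivity map: occlude square patches and
    record the drop in the black-box score for each region."""
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # A larger drop means the region mattered more to the prediction.
            heat[y:y + patch, x:x + patch] = base_score - predict(occluded)
    return heat

# Toy "classifier" (hypothetical): its score is the mean intensity of
# the top-left 8x8 quadrant, so only that region should light up.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(toy_predict, img)
```

On whole-slide pathology images this would be applied tile by tile rather than pixel by pixel, but the principle, perturb the input and observe the output, is the same one underlying perturbation-based explainers such as LIME.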
