Automatic Infectious Disease Classification Analysis with Concept Discovery

Automatic infectious disease classification from images can facilitate much-needed medical diagnoses. Such an approach can identify diseases, like tuberculosis, which remain under-diagnosed due to resource constraints, as well as novel and emerging diseases, like monkeypox, which clinicians have little experience or acumen in diagnosing. Avoiding missed or delayed diagnoses would prevent further transmission and improve clinical outcomes. To understand and trust neural network predictions, analysis of the learned representations is necessary. In this work, we argue that automatic discovery of concepts, i.e., human-interpretable attributes, allows for a deep understanding of the information learned in medical image analysis tasks, generalizing beyond the training labels or protocols. We provide an overview of existing concept discovery approaches in the medical imaging and computer vision communities, and evaluate representative methods on tuberculosis (TB) prediction and monkeypox prediction tasks. Finally, we propose NMFx, a general NMF formulation of interpretability by concept discovery that works in a unified way in unsupervised, weakly supervised, and supervised scenarios.
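To make the general idea of NMF-based concept discovery concrete, the following is a minimal Python sketch that factorizes CNN activations with non-negative matrix factorization, in the spirit of Deep Feature Factorization; it is not the NMFx method itself. The backbone (VGG16), the layer used, the number of concepts k, and the image paths are illustrative assumptions.

```python
# Minimal sketch of NMF-based concept discovery over CNN activations.
# Assumptions (not from the paper): VGG16 backbone, last conv block, k=4 concepts.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.decomposition import NMF
from PIL import Image

def extract_activations(image_paths, device="cpu"):
    """Return (N, C, h, w) activations from the convolutional part of VGG16."""
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    with torch.no_grad():
        acts = model(batch.to(device))  # non-negative after the final ReLU/pool
    return acts.cpu().numpy()

def discover_concepts(acts, k=4):
    """Factorize activations A ~= W H with NMF.

    A is reshaped to (N*h*w, C); each row of H is a 'concept' direction in
    channel space, and W gives per-pixel concept strengths that can be
    reshaped into heatmaps for visual inspection.
    """
    n, c, h, w = acts.shape
    A = acts.transpose(0, 2, 3, 1).reshape(-1, c)   # (N*h*w, C), entries >= 0
    nmf = NMF(n_components=k, init="nndsvda", max_iter=400, random_state=0)
    W = nmf.fit_transform(A)                        # (N*h*w, k) spatial concept weights
    H = nmf.components_                             # (k, C) concept prototypes
    heatmaps = W.reshape(n, h, w, k)                # per-image, per-concept maps
    return heatmaps, H

if __name__ == "__main__":
    # Hypothetical chest X-ray file names; replace with real image paths.
    paths = ["cxr_001.png", "cxr_002.png"]
    acts = extract_activations(paths)
    heatmaps, prototypes = discover_concepts(acts, k=4)
    print(heatmaps.shape, prototypes.shape)
```

Under these assumptions, the per-concept heatmaps can be upsampled to the input resolution and overlaid on the chest X-ray or skin-lesion images to inspect which image regions each discovered concept responds to.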
