Explainable image analysis for decision support in medical healthcare
Recent advances in medical imaging and deep learning have enabled the efficient analysis of large databases of images. Notable examples include the analysis of computed tomography (CT), magnetic resonance imaging (MRI), and X-ray images. While the automatic classification of images has proven successful, adopting such a paradigm in the medical healthcare setting is infeasible. Indeed, the physician in charge of the detailed medical assessment and diagnosis of patients cannot trust a deep learning model's decisions without further explanations or insights into its classification outcome. In this study, rather than relying on classification, we propose a new method that leverages deep neural networks to extract a representation of images and further analyzes it through clustering, dimensionality reduction for visualization, and class activation mapping. Thus, the system does not make decisions on behalf of physicians; instead, it helps them make a diagnosis. Experimental results on lung images affected by pneumonia and COVID-19 lesions show the potential of our method as a tool for decision support in a medical setting: it allows the physician to identify groups of similar images and highlights the regions of the input that the model deemed important for its predictions.
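A minimal sketch of such a pipeline is given below, assuming PyTorch/torchvision for feature extraction, scikit-learn for clustering and 2-D projection, and a hand-rolled Grad-CAM for the activation maps. The backbone (ResNet-18), the number of clusters (k = 3), t-SNE as the projection method, and the image directory name are illustrative assumptions, not the configuration reported in the paper.

```python
# Sketch of the described pipeline: deep representations -> clustering ->
# 2-D visualization -> class activation mapping. Illustrative only.
import glob

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# Pretrained CNN used purely as a feature extractor (classifier head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Return an (N, 512) array of deep representations for the given images."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return backbone(batch).numpy()

image_paths = sorted(glob.glob("lung_images/*.png"))  # hypothetical directory
features = embed(image_paths)

# Group similar images so the physician can inspect them cluster by cluster.
cluster_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Project the high-dimensional representations to 2-D for visual exploration.
coords = TSNE(n_components=2, random_state=0).fit_transform(features)

# Grad-CAM on the last convolutional block: highlight the input regions that
# most influence the model's top-scoring class for one image.
cam_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
store = {}
cam_layer = cam_model.layer4[-1]
cam_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
cam_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = preprocess(Image.open(image_paths[0]).convert("RGB")).unsqueeze(0)
cam_model(x).max().backward()  # backpropagate the top-class score

w = store["grad"].mean(dim=(2, 3), keepdim=True)           # per-channel weights
cam = torch.relu((w * store["act"]).sum(dim=1)).squeeze()  # coarse heat map
cam = cam / (cam.max() + 1e-8)  # normalize; upsample to overlay on the image
```

In this reading of the method, the clusters and the 2-D map support browsing groups of similar cases, while the normalized heat map, once upsampled to the input resolution, can be overlaid on the original image to show which lung regions drove the model's response.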