Iteratively unveiling new regions of interest in Deep Learning models

Recent advances in deep learning have been transforming the landscape in many domains. However, understanding the predictions of a deep network remains a challenge, which is especially sensitive in health care, where interpretability is key. Techniques that rely on saliency maps, which highlight the regions of an image that most influence the classifier's decision, are often used for that purpose. However, gradient fluctuations make saliency maps noisy and thus difficult to interpret at a human level. Moreover, models tend to focus on one particular influential region of interest (ROI) in the image, even though other regions might be relevant for the decision. We propose a new framework that refines those saliency maps to generate segmentation masks over the ROIs of the initial image. As a second contribution, we propose to apply those masks to the original inputs, then evaluate our classifier on the masked inputs to identify previously overlooked ROIs. This iterative procedure allows us to emphasize new regions of interest by extracting meaningful information from the saliency maps.
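
Below is a minimal sketch of the iterative procedure described above. It is not the paper's implementation: plain gradient saliency and simple thresholding stand in for the framework's actual saliency and mask-refinement steps, and all names, parameters, and tensor shapes (a PyTorch classifier taking (1, C, H, W) inputs) are illustrative assumptions.

import torch

def compute_saliency(model, image):
    # Stand-in saliency: |d top-class score / d input|, max over channels -> (H, W).
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0)).max()
    score.backward()
    return image.grad.abs().max(dim=0).values

def refine_to_mask(saliency, threshold=0.5):
    # Stand-in refinement: normalize the saliency map and threshold it into a binary ROI mask.
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    return (saliency > threshold).float()

def iterative_roi_discovery(model, image, num_iterations=3, threshold=0.5):
    # Iteratively mask out each discovered ROI and re-run the classifier,
    # so later passes highlight previously overlooked regions.
    masked_image = image.clone()
    masks = []
    for _ in range(num_iterations):
        saliency = compute_saliency(model, masked_image)
        mask = refine_to_mask(saliency, threshold)    # ROI found on this pass
        masks.append(mask)
        masked_image = masked_image * (1 - mask)      # hide it for the next pass
    return masks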