Evaluation and comparison of CNN visual explanations for histopathology
[1] Meyke Hermsen, et al. 1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset, 2018, GigaScience.
[2] N. Arun, et al. Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging, 2020, medRxiv.
[3] Nasir Rajpoot, et al. PanNuke Dataset Extension, Insights and Baselines, 2020, arXiv.
[4] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Bolei Zhou, et al. Learning Deep Features for Discriminative Localization, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Henning Müller, et al. Visualizing and interpreting feature reuse of pretrained CNNs for histopathology, 2019.
[7] Henning Müller, et al. Generalizing convolution neural networks on stain color heterogeneous data for computational pathology, 2020, Medical Imaging: Digital Pathology.
[8] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[9] Bolei Zhou, et al. Interpreting Deep Visual Representations via Network Dissection, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[10] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[11] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.