[1] Surabhi Bhargava, et al. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology, 2017, IEEE Transactions on Medical Imaging.
[2] Dumitru Erhan, et al. The (Un)reliability of Saliency Methods, 2017, Explainable AI.
[3] I. Ellis, et al. Pathological prognostic factors in breast cancer, 1999, Critical Reviews in Oncology/Hematology.
[4] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[5] Martin Wattenberg, et al. TCAV: Relative Concept Importance Testing with Linear Concept Activation Vectors, 2018.
[6] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[7] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Robert M. Haralick, et al. Textural Features for Image Classification, 1973, IEEE Trans. Syst. Man Cybern.
[9] S. Zinger, et al. Automated Detection and Classification of Cancer Metastases in Whole-Slide Histopathology Images Using Deep Learning, 2017.
[10] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[11] I. Ellis, et al. Pathological prognostic factors in breast cancer. I. The value of histological grade in breast cancer: experience from a large study with long-term follow-up, 2002, Histopathology.
[12] Peter H. N. de With, et al. Cancer Detection in Histopathology Whole-Slide Images Using Conditional Random Fields on Deep Embedded Spaces, 2018, Medical Imaging.
[13] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[14] Wojciech Samek, et al. Methods for Interpreting and Understanding Deep Neural Networks, 2017, Digital Signal Processing.
[15] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[16] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[17] Been Kim, et al. Towards a Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.