[1] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[2] Quoc V. Le, et al. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, 2019, ICML.
[3] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[4] David Duvenaud, et al. Explaining Image Classifiers by Counterfactual Generation, 2018, ICLR.
[5] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[6] Dumitru Erhan, et al. Evaluating Feature Importance Estimates, 2018, ArXiv.
[7] Ramprasaath R. Selvaraju, et al. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization, 2016.
[8] Yee Whye Teh, et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, 2016, ICLR.
[9] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv 1702.08608.
[10] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[11] Ben Poole, et al. Categorical Reparameterization with Gumbel-Softmax, 2016, ICLR.
[12] Seong Joon Oh, et al. Evaluating Weakly Supervised Object Localization Methods Right, 2020, CVPR.
[13] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[14] Thomas S. Huang, et al. Generative Image Inpainting with Contextual Attention, 2018, CVPR.
[15] Kate Saenko, et al. RISE: Randomized Input Sampling for Explanation of Black-box Models, 2018, BMVC.
[16] Dumitru Erhan, et al. Going deeper with convolutions, 2015, CVPR.
[17] Kyunghyun Cho, et al. Classifier-agnostic saliency map extraction, 2018, AAAI.
[18] Been Kim, et al. BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth, 2019, ArXiv.
[19] Scott M. Lundberg, et al. Consistent Individualized Feature Attribution for Tree Ensembles, 2018, ArXiv.
[20] Max Welling, et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis, 2017, ICLR.
[21] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.
[22] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[23] Pengfei Xiong, et al. Deep Fusion Network for Image Completion, 2019, ACM Multimedia.
[24] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[25] Andrea Vedaldi, et al. Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks, 2018, CVPR.
[26] Yarin Gal, et al. Real Time Image Saliency for Black Box Classifiers, 2017, NIPS.
[27] Rodrigo Benenson, et al. Large-Scale Interactive Object Segmentation With Human Annotators, 2019, CVPR.
[28] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[29] Lijie Fan, et al. Adversarial Localization Network, 2017.
[30] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[31] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[32] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, ICCV.