RISE: Randomized Input Sampling for Explanation of Black-box Models
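For readers unfamiliar with the method named in the title: RISE estimates the importance of each image region by querying the black-box model on many randomly masked copies of the input and weighting each mask by the resulting class score. The following is a minimal illustrative sketch of that idea, not the authors' released code; `model_score`, the grid size, the masking probability `p`, and the mask count are assumptions chosen for clarity.

```python
import numpy as np
from scipy.ndimage import zoom  # bilinear upsampling of the low-res masks


def rise_saliency(model_score, image, n_masks=4000, grid=7, p=0.5, seed=0):
    """Illustrative RISE sketch (assumed interface, not the authors' code).

    model_score: black-box callable, (H, W, C) image -> scalar class score.
    Returns an (H, W) saliency map.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    ch, cw = -(-H // grid), -(-W // grid)  # cell size in pixels (ceil division)
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Low-resolution Bernoulli mask, bilinearly upsampled past the image
        # size so it can be cropped at a random sub-cell offset.
        cells = (rng.random((grid, grid)) < p).astype(float)
        up = zoom(cells, ((grid + 1) * ch / grid, (grid + 1) * cw / grid), order=1)
        dy, dx = rng.integers(0, ch), rng.integers(0, cw)
        mask = up[dy:dy + H, dx:dx + W]
        # Weight the mask by the model's score on the masked image.
        saliency += model_score(image * mask[..., None]) * mask
    # Normalize by the expected mask value so scores are comparable.
    return saliency / (n_masks * p)
```

In practice one would batch the masked images through the network rather than looping one at a time; the loop form is kept here only for readability.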
[1] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[2] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[3] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[4] Bolei Zhou, et al. Learning Deep Features for Discriminative Localization, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017 IEEE International Conference on Computer Vision (ICCV).
[6] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[7] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[8] Wei Xu, et al. Look and Think Twice: Capturing Top-Down Visual Attention with Feedback Convolutional Neural Networks, 2015 IEEE International Conference on Computer Vision (ICCV).
[9] T. Lombrozo. The Instrumental Value of Explanations, 2011.
[10] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[11] W. R. Swartout. Producing Explanations and Justifications of Expert Consulting Programs, 1981.
[12] Thomas Brox, et al. Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks, 2016, NIPS.
[13] Sebastian Thrun, et al. Extracting Rules from Artificial Neural Networks with Distributed Representations, 1994, NIPS.
[14] Trevor Darrell, et al. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[15] Luc Van Gool, et al. The Pascal Visual Object Classes (VOC) Challenge, 2010, International Journal of Computer Vision.
[16] Yarin Gal, et al. Real Time Image Saliency for Black Box Classifiers, 2017, NIPS.
[17] T. Lombrozo. The Structure and Function of Explanations, 2006, Trends in Cognitive Sciences.
[18] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, arXiv.
[19] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017 IEEE International Conference on Computer Vision (ICCV).
[20] Johanna D. Moore, et al. Explanation in Second Generation Expert Systems, 1993.
[21] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[22] Bernease Herman, et al. The Promise and Peril of Human Evaluation for Model Interpretability, 2017, arXiv.
[23] Zhe L. Lin, et al. Top-Down Neural Attention by Excitation Backprop, 2016, International Journal of Computer Vision.
[24] Kate Saenko, et al. Top-Down Visual Saliency Guided by Captions, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Dan Klein, et al. Learning to Compose Neural Networks for Question Answering, 2016, NAACL.
[26] Trevor Darrell, et al. Generating Visual Explanations, 2016, ECCV.
[27] Trevor Darrell, et al. Long-Term Recurrent Convolutional Networks for Visual Recognition and Description, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Yoshua Bengio, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2015, ICML.
[29] Trevor Darrell, et al. Learning to Reason: End-to-End Module Networks for Visual Question Answering, 2017 IEEE International Conference on Computer Vision (ICCV).
[30] Mark Craven, et al. Extracting Comprehensible Models from Trained Neural Networks, 1996.
[31] Regina A. Pomranky, et al. The Role of Trust in Automation Reliance, 2003, Int. J. Hum. Comput. Stud.
[32] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[33] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.