Tolga Bolukbasi | Fernanda B. Viégas | Andrei Kapishnikov | Michael Terry
[1] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[2] Yarin Gal, et al. Real Time Image Saliency for Black Box Classifiers, 2017, NIPS.
[3] Dumitru Erhan, et al. Going deeper with convolutions, 2015, CVPR.
[4] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, arXiv.
[5] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[6] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[7] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[8] Markus H. Gross, et al. A unified view of gradient-based attribution methods for Deep Neural Networks, 2017, NIPS.
[9] Daniel P. Huttenlocher, et al. Efficient Graph-Based Image Segmentation, 2004, International Journal of Computer Vision.
[10] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, ICCV.
[11] Dumitru Erhan, et al. The (Un)reliability of saliency methods, 2017, Explainable AI.
[12] Ankur Taly, et al. Exploring Principled Visualizations for Deep Network Attributions, 2019, IUI Workshops.
[13] Cengiz Öztireli, et al. Towards better understanding of gradient-based attribution methods for Deep Neural Networks, 2017, ICLR.
[14] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[15] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[16] Max Welling, et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis, 2017, ICLR.
[17] Ankur Taly, et al. A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values, 2018, arXiv.
[18] Anna Shcherbina, et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, 2016, arXiv.
[19] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[20] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[21] Jonas Mueller, et al. What made you do this? Understanding black-box decisions with sufficient input subsets, 2018, AISTATS.
[22] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[23] Alexander Binder, et al. Explaining nonlinear classification decisions with deep Taylor decomposition, 2015, Pattern Recognition.
[24] Motoaki Kawanabe, et al. How to Explain Individual Classification Decisions, 2009, Journal of Machine Learning Research.
[25] Vineeth N. Balasubramanian, et al. Neural Network Attributions: A Causal Perspective, 2019, ICML.
[26] Klaus-Robert Müller, et al. Learning how to explain neural networks: PatternNet and PatternAttribution, 2017, ICLR.
[27] Abubakar Abid, et al. Interpretation of Neural Networks is Fragile, 2017, AAAI.
[28] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.