Sanity Checks for Saliency Maps
Julius Adebayo | Justin Gilmer | Michael Muelly | Ian J. Goodfellow | Moritz Hardt | Been Kim
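The paper proposes two randomization tests for saliency methods: a model parameter randomization test, which compares maps from a trained model against maps from the same architecture with weights re-initialized layer by layer from the top down, and a data randomization test, which retrains the model on randomly permuted labels. Methods whose maps stay essentially unchanged under these tests (the paper finds this for Guided Backpropagation and Guided Grad-CAM, among others) cannot be depending on what the model learned. Below is a minimal sketch of the cascading parameter randomization test for vanilla gradient saliency, using Spearman rank correlation as the similarity score as in the paper; it assumes a PyTorch image classifier, and the function and variable names are illustrative, not taken from the authors' code.

```python
# Sketch of the cascading model-parameter randomization test.
# Assumes x has shape [1, C, H, W] and model is a PyTorch classifier.
import copy
import torch
from scipy.stats import spearmanr

def gradient_saliency(model, x, target):
    """Vanilla gradient saliency: |d logit_target / d input|."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[0, target]   # scalar logit for the target class
    logit.backward()
    return x.grad.abs().squeeze(0)

def randomization_test(model, x, target):
    """Re-initialize layers one at a time, cascading from the output
    layer toward the input, and compare saliency maps to the trained
    model's map after each reset (as in the paper's cascading variant)."""
    model.eval()
    base = gradient_saliency(model, x, target).flatten()
    randomized = copy.deepcopy(model)
    randomized.eval()
    scores = []
    for module in reversed(list(randomized.modules())):
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()  # destroy the learned weights in this layer
            sal = gradient_saliency(randomized, x, target).flatten()
            rho, _ = spearmanr(base.detach().numpy(), sal.detach().numpy())
            scores.append(rho)  # near-zero rho => map depends on the weights
    return scores
```

A saliency method passes this check only if the correlation scores fall off as layers are randomized; a method whose maps remain highly correlated with the trained model's maps is insensitive to the model parameters.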
[1] J. Casillas. Interpretability issues in fuzzy modeling, 2003.
[2] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[3] Pascal Vincent, et al. Visualizing Higher-Layer Features of a Deep Network, 2009.
[4] Motoaki Kawanabe, et al. How to Explain Individual Classification Decisions, 2009, J. Mach. Learn. Res.
[5] Zhenghao Chen, et al. On Random Weights and Unsupervised Feature Learning, 2011, ICML.
[6] Paulo J. G. Lisboa, et al. Making machine learning models interpretable, 2012, ESANN.
[7] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[8] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[9] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[10] Cynthia Rudin, et al. Causal Falling Rule Lists, 2015, ArXiv.
[11] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[12] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, CVPR 2016.
[13] Anna Shcherbina, et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, 2016, ArXiv.
[14] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[15] Andrea Vedaldi, et al. Salient Deconvolutional Networks, 2016, ECCV.
[16] Abhishek Das, et al. Grad-CAM: Why did you say that?, 2016, ArXiv.
[17] Ran Gilad-Bachrach, et al. Debugging Machine Learning Models, 2016.
[18] Max Welling, et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis, 2017, ICLR.
[19] Jure Leskovec, et al. Interpretable & Explorable Approximations of Black Box Models, 2017, ArXiv.
[20] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[21] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[22] Yoshua Bengio, et al. Understanding intermediate layers using linear classifier probes, 2016, ICLR.
[23] Alexander Binder, et al. Evaluating the Visualization of What a Deep Neural Network Has Learned, 2015, IEEE Transactions on Neural Networks and Learning Systems.
[24] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, ICCV.
[25] Yarin Gal, et al. Real Time Image Saliency for Black Box Classifiers, 2017, NIPS.
[26] David Weinberger, et al. Accountability of AI Under the Law: The Role of Explanation, 2017, ArXiv.
[27] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[28] Seth Flaxman, et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation", 2016, AI Mag.
[29] Markus H. Gross, et al. A unified view of gradient-based attribution methods for Deep Neural Networks, 2017, NIPS.
[30] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[31] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[32] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[33] Klaus-Robert Müller, et al. Learning how to explain neural networks: PatternNet and PatternAttribution, 2017, ICLR.
[34] Yang Zhang, et al. A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations, 2018, ICML.
[35] Beomsu Kim, et al. Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative, 2018, ArXiv.
[36] Been Kim, et al. Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values, 2018, ICLR.
[37] Jeremy Tan, et al. Automatic Shadow Detection in 2D Ultrasound, 2018.
[38] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.
[39] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[40] Cengiz Öztireli, et al. Towards better understanding of gradient-based attribution methods for Deep Neural Networks, 2017, ICLR.
[41] Andrea Vedaldi, et al. Deep Image Prior, 2017, International Journal of Computer Vision.
[42] Dumitru Erhan, et al. The (Un)reliability of saliency methods, 2017, Explainable AI.
[43] Abubakar Abid, et al. Interpretation of Neural Networks is Fragile, 2017, AAAI.
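Several of the attribution methods audited in the paper reduce to a few lines of autograd code. As one example, SmoothGrad [31] averages vanilla gradient maps over Gaussian-perturbed copies of the input, with the noise standard deviation set as a fraction of the input's value range. A minimal sketch follows, reusing the gradient_saliency helper from the sketch above; the hyperparameter defaults are illustrative, not prescribed values.

```python
# Sketch of SmoothGrad [31]: average gradient saliency over noisy inputs.
# Relies on gradient_saliency(model, x, target) defined earlier.
import torch

def smoothgrad(model, x, target, n_samples=25, sigma=0.15):
    """sigma is the noise level as a fraction of the input's value range."""
    span = (x.max() - x.min()).item()
    total = torch.zeros_like(x.squeeze(0))
    for _ in range(n_samples):
        noisy = x + torch.randn_like(x) * sigma * span  # scale noise to input range
        total += gradient_saliency(model, noisy, target)
    return total / n_samples  # averaged map is visibly less speckled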