Explaining Image Classifiers by Counterfactual Generation
Chun-Hao Chang | Elliot Creager | Anna Goldenberg | David Duvenaud