Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
Marco Ancona | Cengiz Öztireli | Markus H. Gross