Evaluation of Post-hoc XAI Approaches Through Synthetic Tabular Data
Andreas Hotho | Daniel Schlör | Lena Hettinger | Markus Ring | Julian Tritscher