[1] Paul Voigt,et al. The EU General Data Protection Regulation (GDPR) , 2017 .
[2] Marie-Jeanne Lesot,et al. Comparison-Based Inverse Classification for Interpretability in Machine Learning , 2018, IPMU.
[3] Percy Liang,et al. Understanding Black-box Predictions via Influence Functions , 2017, ICML.
[4] Solon Barocas,et al. The hidden assumptions behind counterfactual explanations and principal reasons , 2019, FAT*.
[5] Abhishek Das,et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization , 2017, ICCV.
[6] Chris Russell,et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR , 2017, arXiv.
[7] Osbert Bastani,et al. Interpretability via Model Extraction , 2017, arXiv.
[8] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[9] Yang Liu,et al. Actionable Recourse in Linear Classification , 2018, FAT.
[10] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[11] Been Kim,et al. Towards A Rigorous Science of Interpretable Machine Learning , 2017, arXiv.
[12] Bernhard Schölkopf,et al. Algorithmic Recourse: from Counterfactual Explanations to Interventions , 2020, arXiv.
[13] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[14] Janis Klaise,et al. Interpretable Counterfactual Explanations Guided by Prototypes , 2019, ECML/PKDD.
[15] Martin Wattenberg,et al. SmoothGrad: removing noise by adding noise , 2017, arXiv.
[16] Carlos Guestrin,et al. Anchors: High-Precision Model-Agnostic Explanations , 2018, AAAI.
[17] Jure Leskovec,et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction , 2016, KDD.
[18] Stephan Günnemann,et al. Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift , 2018, NeurIPS.
[19] Peter A. Flach,et al. FACE: Feasible and Actionable Counterfactual Explanations , 2020, AIES.
[20] Jure Leskovec,et al. Faithful and Customizable Explanations of Black Box Models , 2019, AIES.
[21] Bernhard Schölkopf,et al. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach , 2020, NeurIPS.
[22] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, KDD.
[23] Amir-Hossein Karimi,et al. Model-Agnostic Counterfactual Explanations for Consequential Decisions , 2019, AISTATS.