How can I choose an explainer?: An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus | Catarina Belém | Vladimir Balayan | João Bento | Pedro Saleiro | Pedro Bizarro | João Gama
[1] Bogdan E. Popescu, et al. Predictive Learning via Rule Ensembles, 2008, arXiv:0811.1679.
[2] Jacob Cohen, et al. The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability, 1973.
[3] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[4] Abubakar Abid, et al. Interpretation of Neural Networks is Fragile, 2017, AAAI.
[5] W. Kruskal, et al. Use of Ranks in One-Criterion Variance Analysis, 1952.
[6] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[7] Jie Chen, et al. Explainable Neural Networks based on Additive Index Models, 2018, ArXiv.
[8] Scott M. Lundberg, et al. Consistent Individualized Feature Attribution for Tree Ensembles, 2018, ArXiv.
[9] Emily Chen, et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation, 2018, ArXiv.
[10] Tommi S. Jaakkola, et al. On the Robustness of Interpretability Methods, 2018, ArXiv.
[11] Vivian Lai, et al. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection, 2018, FAT.
[12] Zachary C. Lipton. The Doctor Just Won't Accept That!, 2017, arXiv:1711.08037.
[13] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[14] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[15] Mykola Pechenizkiy, et al. Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models, 2019, ArXiv.
[16] Rich Caruana, et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, 2017, AIES.
[17] K. Pearson. X. On the Criterion that a Given System of Deviations from the Probable in the Case of a Correlated System of Variables is Such that it Can be Reasonably Supposed to have Arisen from Random Sampling, 2009.
[18] Chandan Singh, et al. Definitions, Methods, and Applications in Interpretable Machine Learning, 2019, Proceedings of the National Academy of Sciences.
[19] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[20] A. N. Kolmogorov. Sulla determinazione empirica di una legge di distribuzione [On the empirical determination of a distribution law], 1933.
[21] Alun D. Preece, et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, 2018, ArXiv.
[22] H. B. Mann, et al. On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other, 1947.
[23] Tim Miller, et al. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences, 2017, ArXiv.
[24] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[25] Hugh Chen, et al. From Local Explanations to Global Understanding with Explainable AI for Trees, 2020, Nature Machine Intelligence.
[26] Rich Caruana, et al. Axiomatic Interpretability for Multiclass Additive Models, 2018, KDD.
[27] Cynthia Rudin, et al. Globally-Consistent Rule-Based Summary-Explanations for Machine Learning Models: Application to Credit-Risk Evaluation, 2019, J. Mach. Learn. Res.
[29] Shi Feng, et al. What can AI do for me?: Evaluating Machine Learning Interpretations in Cooperative Play, 2019, IUI.
[30] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[31] Eric D. Ragan, et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems, 2018, ACM Trans. Interact. Intell. Syst.
[32] Sameer Singh, et al. Towards Extracting Faithful and Descriptive Representations of Latent Variable Models, 2015, AAAI Spring Symposia.
[33] K. Pearson. On the Criterion that a Given System of Deviations from the Probable in the Case of a Correlated System of Variables is Such that it Can be Reasonably Supposed to have Arisen from Random Sampling, 1900.
[34] Leo Breiman. Random Forests, 2001, Machine Learning.
[35] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[36] Wojciech Kotłowski, et al. Maximum Likelihood Rule Ensembles, 2008, ICML '08.
[37] Johannes Gehrke, et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[38] Denali Molitor, et al. Model Agnostic Supervised Local Explanations, 2018, NeurIPS.
[39] S. Holm. A Simple Sequentially Rejective Multiple Test Procedure, 1979.
[40] Sameer Singh, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, NAACL.
[41] Timnit Gebru, et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 2018, FAT.
[42] Jure Leskovec, et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.
[43] Felix Bießmann, et al. Quantifying Interpretability and Trust in Machine Learning Systems, 2019, ArXiv.
[44] Yoram Singer, et al. A Simple, Fast, and Effective Rule Learner, 1999, AAAI.