Ankur Taly, Maarten de Rijke, Ana Lucic, Q. Vera Liao, Alice Xiang, Umang Bhatt, Madhulika Srikumar
[1] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[2] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[3] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[4] Alice Xiang, et al. Machine Learning Explainability for External Stakeholders, 2020, ArXiv.
[5] Timnit Gebru, et al. Datasheets for Datasets, 2018, Commun. ACM.
[6] Gary Klein, et al. Metrics for Explainable AI: Challenges and Prospects, 2018, ArXiv.
[7] Inioluwa Deborah Raji, et al. Model Cards for Model Reporting, 2018, FAT*.
[8] Ankur Taly, et al. Explainable Machine Learning in Deployment, 2020, FAT*.
[9] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[10] Q. Vera Liao, et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences, 2020, CHI.
[11] Harmanpreet Kaur, et al. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning, 2020, CHI.
[12] Cynthia Rudin. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, 2018, Nature Machine Intelligence.
[13] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[14] Eric D. Ragan, et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems, 2018, ACM Trans. Interact. Intell. Syst.
[15] A. Chouldechova, et al. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services, 2019, CHI.
[16] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.