[1] William J. Clancey, et al. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI, 2019, ArXiv.
[2] Christian Werner. Explainable AI through Rule-based Interactive Conversation, 2020, EDBT/ICDT Workshops.
[3] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[4] Przemyslaw Biecek, et al. archivist: An R Package for Managing, Recording and Restoring Data Analysis Results, 2017, arXiv:1706.08822.
[5] Emily Chen, et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation, 2018, ArXiv.
[6] Peter A. Flach, et al. Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant, 2018, IJCAI.
[7] Amit Dhurandhar, et al. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, 2019, ArXiv.
[8] Bernd Bischl, et al. Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability, 2019, PKDD/ECML Workshops.
[9] Tim Miller, et al. A Grounded Interaction Protocol for Explainable Artificial Intelligence, 2019, AAMAS.
[10] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[11] Sameer Singh, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, NAACL.
[12] Peter A. Flach, et al. Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements, 2018, IJCAI.
[13] Nico Hochgeschwender, et al. Conversational Interfaces for Explainable AI: A Human-Centred Approach, 2019, EXTRAAMAS@AAMAS.
[14] Justine Cassell, et al. A Model of Social Explanations for a Conversational Movie Recommendation System, 2019, HAI.
[15] Sebastian Gehrmann, et al. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models, 2019, ArXiv.
[16] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[17] Tim Miller, et al. Explainable AI: Beware of Inmates Running the Asylum. Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences, 2017, ArXiv.
[18] Przemyslaw Biecek, et al. modelStudio: Interactive Studio with Explanations for ML Predictive Models, 2019, J. Open Source Softw.
[19] Lalana Kagal, et al. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning, 2018.
[20] Przemyslaw Biecek, et al. pyCeterisParibus: explaining Machine Learning models with Ceteris Paribus Profiles in Python, 2019, J. Open Source Softw.
[21] Yujia Zhang, et al. Why should you trust my interpretation? Understanding uncertainty in LIME predictions, 2019, ArXiv.
[22] N. Cristianini, et al. Machine Decisions and Human Consequences, 2018, Algorithmic Regulation.
[23] Tim Miller, et al. Towards a Grounded Dialog Model for Explainable Artificial Intelligence, 2018, ArXiv.
[24] Mireia Ribera, et al. Can we do better explanations? A proposal of user-centered explainable AI, 2019, IUI Workshops.
[25] Alun D. Preece, et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, 2018, ArXiv.
[26] Przemyslaw Biecek, et al. DALEX: explainers for complex predictive models, 2018, J. Mach. Learn. Res.
[27] Rich Caruana, et al. InterpretML: A Unified Framework for Machine Learning Interpretability, 2019, ArXiv.
[28] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[29] Peter A. Flach, et al. One Explanation Does Not Fit All, 2020, KI - Künstliche Intelligenz.
[30] Bernd Bischl, et al. Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition, 2019, ArXiv.