Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle
Yunyao Li | Lucian Popa | Christine T. Wolf | Anbang Xu | Shipi Dhanorkar | Kun Qian
[1] John Zimmerman, et al. Mapping Machine Learning Advances from HCI Research to Reveal Starting Places for Design Innovation, 2018, CHI.
[2] Elisa Giaccardi, et al. Designing and Prototyping from the Perspective of AI in the Wild, 2019, Conference on Designing Interactive Systems.
[3] Sungsoo Ray Hong, et al. Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs, 2020, Proc. ACM Hum. Comput. Interact.
[4] Alfred Gell, et al. Technology and Magic, 1988.
[5] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[6] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[7] Le Minh Nguyen, et al. Text analytics in industry: Challenges, desiderata and trends, 2016, Comput. Ind.
[8] Kun Qian, et al. A Survey of the State of Explainable AI for Natural Language Processing, 2020, AACL/IJCNLP.
[9] Mohan S. Kankanhalli, et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, 2018, CHI.
[10] Haiyi Zhu, et al. The Changing Contours of "Participation" in Data-driven, Algorithmic Ecosystems: Challenges, Tactics, and an Agenda, 2018, CSCW Companion.
[11] H. Chad Lane, et al. Building Explainable Artificial Intelligence Systems, 2006, AAAI.
[12] Lalana Kagal, et al. Explaining Explanations: An Overview of Interpretability of Machine Learning, 2018, IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[13] Judith S. Olson, et al. Ways of Knowing in HCI, 2014, Springer New York.
[14] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[15] Elisa Giaccardi, et al. Encountering ethics through design: a workshop with nonhuman participants, 2020, AI & SOCIETY.
[16] Clay Spinuzzi, et al. Context and consciousness: Activity theory and human-computer interaction, 1997.
[17] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, 2017, IJCAI.
[18] Mark S. Handcock, et al. Respondent-Driven Sampling: An Assessment of Current Methodology, 2009, Sociological Methodology.
[19] Mark O. Riedl, et al. Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach, 2020, HCI.
[20] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[21] Karen Holtzblatt, et al. Rapid Contextual Design, 2005.
[22] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[23] Arne Berger, et al. Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry, 2021, CHI.
[24] Lucy A. Suchman, et al. Plans and Situated Actions: The Problem of Human-Machine Communication, 1987.
[25] Qian Yang, et al. Designing Theory-Driven User-Centric Explainable AI, 2019, CHI.
[26] William R. Swartout, et al. XPLAIN: A System for Creating and Explaining Expert Consulting Programs, 1983, Artif. Intell.
[27] Lucy Suchman, et al. Human-Machine Reconfigurations: Plans and Situated Actions, 2006.
[28] Lauren Wilcox, et al. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making, 2019, Proc. ACM Hum. Comput. Interact.
[29] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, J. Mach. Learn. Res.
[30] Mark O. Riedl, et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions, 2019, IUI.
[31] Johanna Ylipulli, et al. Artificial Intelligence and Risk in Design, 2020, Conference on Designing Interactive Systems.
[32] Rebecca Gray, et al. Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed, 2015, CHI.
[33] Inioluwa Deborah Raji, et al. Model Cards for Model Reporting, 2018, FAT.
[34] Michael Veale, et al. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making, 2018, CHI.
[35] Zhiwei Steven Wu, et al. Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives, 2019, Conference on Designing Interactive Systems.
[36] Ming Yin, et al. Understanding the Effect of Accuracy on Trust in Machine Learning Models, 2019, CHI.
[37] John Zimmerman, et al. Investigating How Experienced UX Designers Effectively Work with Machine Learning, 2018, Conference on Designing Interactive Systems.
[38] Mohit Bansal, et al. Interpreting Neural Networks to Improve Politeness Comprehension, 2016, EMNLP.
[39] Taina Bucher, et al. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms, 2017, The Social Power of Algorithms.
[40] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[41] W. Lewis Johnson, et al. Agents that Learn to Explain Themselves, 1994, AAAI.
[42] Graham Dove, et al. Monsters, Metaphors, and Machine Learning, 2020, CHI.
[43] Brian Magerko, et al. What is AI Literacy? Competencies and Design Considerations, 2020, CHI.
[44] Q. Liao, et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences, 2020, CHI.
[45] Eric Gilbert, et al. User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms, 2019, CHI.
[46] William A. Stahl. Venerating the Black Box: Magic in Media Discourse on Technology, 1995, Science, Technology, & Human Values.
[48] Steve Whittaker, et al. Progressive disclosure: empirically motivated approaches to designing effective transparency, 2019, IUI.
[49] John Riedl, et al. Explaining collaborative filtering recommendations, 2000, CSCW '00.
[50] Jenna Burrell, et al. How the machine ‘thinks’: Understanding opacity in machine learning algorithms, 2016, Big Data & Society.
[51] Xinlei Chen, et al. Visualizing and Understanding Neural Models in NLP, 2015, NAACL.
[52] Johannes Gehrke, et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[53] Brian Y. Lim, et al. COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations, 2020, CHI.
[54] Christine T. Wolf. Explainability scenarios: towards scenario-based XAI design, 2019, IUI.