Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces
[1] Jürgen Ziegler, et al. Explaining recommendations by means of aspect-based transparent memories, 2020, IUI.
[2] Li Chen, et al. Trust building with explanation interfaces, 2006, IUI '06.
[3] Yusuke Sugano, et al. Investigating audio data visualization for interactive sound recognition, 2020, IUI.
[4] Alfred Kobsa, et al. Inspectability and control in social recommenders, 2012, RecSys.
[5] Patrick Gebhard, et al. PARLEY: a transparent virtual social agent training interface, 2019, IUI Companion.
[6] Justin D. Weisz, et al. BigBlueBot: teaching strategies for successful human-agent interactions, 2019, IUI.
[7] Ayan Banerjee, et al. On evaluating the effects of feedback for Sign language learning using Explainable AI, 2020, IUI Companion.
[8] Carrie J. Cai, et al. The effects of example-based explanations in a machine learning interface, 2019, IUI.
[9] Vishwa Shah, et al. Friend, Collaborator, Student, Manager: How Design of an AI-Driven Game Level Editor Affects Creators, 2019, CHI.
[10] Adam Roegiest, et al. Dancing with the AI Devil: Investigating the Partnership Between Lawyers and AI, 2020, CHIIR.
[11] Zhiwei Steven Wu, et al. Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives, 2019, Conference on Designing Interactive Systems.
[12] Nava Tintarev, et al. Explanations of recommendations, 2007, RecSys '07.
[13] Margaret M. Burnett, et al. How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games, 2017, CHI.
[14] L. Longo, et al. Explainable Artificial Intelligence: a Systematic Review, 2020, ArXiv.
[15] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[16] Virpi Roto, et al. Understanding, scoping and defining user experience: a survey approach, 2009, CHI.
[17] Heinrich Hußmann, et al. The Impact of Placebic Explanations on Trust in Intelligent Systems, 2019, CHI Extended Abstracts.
[18] Mohan S. Kankanhalli, et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, 2018, CHI.
[19] Donald A. Norman, et al. User Centered System Design: New Perspectives on Human-Computer Interaction, 1986.
[20] Matthew E. Taylor, et al. Towers of Saliency: A Reinforcement Learning Visualization Using Immersive Environments, 2019, ISS.
[21] Elizabeth Sklar, et al. Explanation through Argumentation, 2018, HAI.
[22] Li Chen, et al. Adaptive tradeoff explanations in conversational recommenders, 2009, RecSys '09.
[23] Helen F. Hastie, et al. MIRIAM: A Multimodal Interface for Explaining the Reasoning Behind Actions of Remote Autonomous Systems, 2018, ICMI.
[24] Eric D. Ragan, et al. Investigating the Importance of First Impressions and Explainable AI with Interactive Video Analysis, 2020, CHI Extended Abstracts.
[25] Martijn Millecamp, et al. To explain or not to explain: the effects of personal characteristics when explaining music recommendations, 2019, IUI.
[26] R. Aharonov, et al. A Survey of the State of Explainable AI for Natural Language Processing, 2020, AACL.
[27] Kenney Ng, et al. Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models, 2016, CHI.
[28] Markus Zanker. The influence of knowledgeable explanations on users' perception of a recommender system, 2012, RecSys '12.
[29] Andrés Lucero, et al. May AI?: Design Ideation with Cooperative Contextual Bandits, 2019, CHI.
[30] Rachel K. E. Bellamy, et al. Explaining models: an empirical study of how explanations impact fairness judgment, 2019, IUI.
[31] Christine T. Wolf. Explainability scenarios: towards scenario-based XAI design, 2019, IUI.
[32] Pearl Brereton, et al. Performing systematic literature reviews in software engineering, 2006, ICSE.
[33] John Riedl, et al. Explaining collaborative filtering recommendations, 2000, CSCW '00.
[34] Helen F. Hastie, et al. Exploring Interaction with Remote Autonomous Systems using Conversational Agents, 2019, Conference on Designing Interactive Systems.
[35] Chandan Singh, et al. Definitions, methods, and applications in interpretable machine learning, 2019, Proceedings of the National Academy of Sciences.
[36] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv (1702.08608).
[37] Eric Horvitz, et al. Principles of mixed-initiative user interfaces, 1999, CHI '99.
[38] Shi Feng, et al. What can AI do for me?: evaluating machine learning interpretations in cooperative play, 2019, IUI.
[39] Andreas Schreiber, et al. Visualization of neural networks in virtual reality using Unreal Engine, 2018, VRST.
[40] Ben Shneiderman, et al. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems That Can Work Together with People, 2020, CHI Extended Abstracts.
[41] Andrew Lim, et al. Ambiguity-aware AI Assistants for Medical Data Analysis, 2020, CHI.
[42] Johanna D. Moore, et al. Requirements for an expert system explanation facility, 1991.
[43] Enrico Costanza, et al. Evaluating saliency map explanations for convolutional neural networks: a user study, 2020, IUI.
[44] Ben Shneiderman, et al. Bridging the Gap Between Ethics and Practice, 2020, ACM Trans. Interact. Intell. Syst.
[45] Alan Cooper, et al. About Face 3: The Essentials of Interaction Design, 2007.
[46] C. E. Shannon. A Mathematical Theory of Communication, 1948, Bell System Technical Journal.
[47] Juliana Jansen Ferreira, et al. What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice, 2020, HCI.
[48] H. Simon. Models of Bounded Rationality: Empirically Grounded Economic Reason, 1997.
[49] Martin Schuessler, et al. Minimalistic Explanations: Capturing the Essence of Decisions, 2019, CHI Extended Abstracts.
[50] Q. Liao, et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences, 2020, CHI.
[51] P. C. Malshe. What is this interaction?, 1994, The Journal of the Association of Physicians of India.
[52] Steven M. Drucker, et al. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models, 2019, CHI.
[53] Pasquale Lops, et al. Justifying Recommendations through Aspect-based Sentiment Analysis of Users Reviews, 2019, UMAP.
[54] Bradley Hayes, et al. Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning, 2019, ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[55] Tobias Höllerer, et al. TasteWeights: a visual interactive hybrid recommender system, 2012, RecSys.
[56] Barry Smyth, et al. A Live-User Study of Opinionated Explanations for Recommender Systems, 2016, IUI.
[57] A. Butz, et al. Mind the (persuasion) gap: contrasting predictions of intelligent DSS with user beliefs to improve interpretability, 2020, EICS.
[58] Maneesh Agrawala, et al. Answering Questions about Charts and Generating Visual Explanations, 2020, CHI.
[59] Subbarao Kambhampati, et al. Plan Explanations as Model Reconciliation - An Empirical Study, 2018, ArXiv.
[60] Andrés Páez, et al. The Pragmatic Turn in Explainable Artificial Intelligence (XAI), 2019, Minds and Machines.
[61] Paul Coulton, et al. The Process of Gaining an AI Legibility Mark, 2020, CHI Extended Abstracts.
[62] Peter Brusilovsky, et al. Explaining educational recommendations through a concept-level knowledge visualization, 2019, IUI Companion.
[63] Bipin Indurkhya, et al. Persona Prototypes for Improving the Qualitative Evaluation of Recommendation Systems, 2020, UMAP.
[64] Barry Smyth, et al. PeerChooser: visual interactive recommendation, 2008, CHI.
[65] Shlomo Berkovsky, et al. Revisiting Habitability in Conversational Systems, 2020, CHI Extended Abstracts.
[66] Jun Zhao, et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions, 2018, CHI.
[67] Paul N. Bennett, et al. Guidelines for Human-AI Interaction, 2019, CHI.
[68] Per Ola Kristensson, et al. A Review of User Interface Design for Interactive Machine Learning, 2018, ACM Trans. Interact. Intell. Syst.
[69] Andrea Bunt, et al. Are explanations always important?: a study of deployed, low-cost intelligent interactive systems, 2012, IUI '12.
[70] Haiyi Zhu, et al. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders, 2019, CHI.
[71] Sonia Chernova, et al. Leveraging rationales to improve human task performance, 2020, IUI.
[72] Ming Yin, et al. Understanding the Effect of Accuracy on Trust in Machine Learning Models, 2019, CHI.
[73] Daniel Jurafsky, et al. Word embeddings quantify 100 years of gender and ethnic stereotypes, 2018, Proceedings of the National Academy of Sciences.
[74] John Riedl, et al. Tagsplanations: explaining recommendations using tags, 2009, IUI.
[75] Chris North, et al. With respect to what?: simultaneous interaction with dimension reduction and clustering projections, 2020, IUI.
[76] Jichen Zhu, et al. Interactive Visualizer to Facilitate Game Designers in Understanding Machine Learning, 2019, CHI Extended Abstracts.
[77] Aaron Springer, et al. Progressive Disclosure, 2020, ACM Trans. Interact. Intell. Syst.
[78] Li Chen, et al. Explaining Recommendations Based on Feature Sentiments in Product Reviews, 2017, IUI.
[79] Amit Dhurandhar, et al. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, 2019, ArXiv.
[80] Heinrich Hußmann, et al. I Drive - You Trust: Explaining Driving Behavior Of Autonomous Cars, 2019, CHI Extended Abstracts.
[81] Ivania Donoso-Guzmán, et al. The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images, 2019, IUI.
[82] Sarit Kraus, et al. Providing explanations for recommendations in reciprocal environments, 2018, RecSys.
[83] Peter Brusilovsky, et al. Evaluating Visual Explanations for Similarity-Based Recommendations: User Perception and Performance, 2019, UMAP.
[84] Monica M. C. Schraefel, et al. Introducing Peripheral Awareness as a Neurological State for Human-computer Integration, 2020, CHI.
[85] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[86] Ning Wang, et al. Trust calibration within a human-robot team: Comparing automatically generated explanations, 2016, ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[87] Jean Scholtz, et al. How do visual explanations foster end users' appropriate trust in machine learning?, 2020, IUI.
[88] Mark O. Riedl, et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions, 2019, IUI.
[89] Sotiris Kotsiantis, et al. Explainable AI: A Review of Machine Learning Interpretability Methods, 2020, Entropy.
[90] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[91] Qian Yang, et al. Designing Theory-Driven User-Centric Explainable AI, 2019, CHI.
[92] Eric E. Geiselman, et al. Intelligent pairing assistant for air operation centers, 2012, IUI '12.
[93] Antti Oulasvirta, et al. HCI Research as Problem-Solving, 2016, CHI.
[94] Anind K. Dey, et al. Weights of evidence for intelligible smart environments, 2012, UbiComp '12.