“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design
Dominik Schiller | Elisabeth André | Tobias Huber | Katharina Weitz | Ruben Schlagowski
[1] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[2] Florian Schiel, et al. Multilingual processing of speech via web services, 2017, Comput. Speech Lang.
[3] Björn W. Schuller, et al. Deep Learning for Environmentally Robust Speech Recognition, 2017, ACM Trans. Intell. Syst. Technol.
[4] John D. Lee, et al. Trust in Automation: Designing for Appropriate Reliance, 2004, Hum. Factors.
[5] H. Chad Lane, et al. Explainable Artificial Intelligence for Training and Tutoring, 2005, AIED.
[6] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[7] John D. Lee, et al. Human-Automation Collaboration in Dynamic Mission Planning: A Challenge Requiring an Ecological Approach, 2006.
[8] Giulio Costantini, et al. A Practical Primer To Power Analysis for Simple Experimental Designs, 2018.
[9] R. Kennedy, et al. Defense Advanced Research Projects Agency (DARPA). Change 1, 1996.
[10] Elisabeth André, et al. An empirical study on the trustworthiness of life-like interface agents, 1999, HCI.
[11] Pete Warden, et al. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition, 2018, ArXiv.
[12] Mark R. Lehto, et al. Foundations for an Empirically Determined Scale of Trust in Automated Systems, 2000.
[13] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[14] Tara N. Sainath, et al. Convolutional neural networks for small-footprint keyword spotting, 2015, INTERSPEECH.
[15] Lalana Kagal, et al. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning, 2018.
[16] Xiaohui Peng, et al. Deep Learning for Sensor-based Activity Recognition: A Survey, 2017, Pattern Recognit. Lett.
[17] Catherine Pelachaud, et al. Automatic Nonverbal Behavior Generation from Image Schemas, 2018, AAMAS.
[18] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[19] Enrico Costanza, et al. Evaluating saliency map explanations for convolutional neural networks: a user study, 2020, IUI.
[20] Avi Rosenfeld, et al. A Survey of Interpretability and Explainability in Human-Agent Systems, 2018.
[21] Stefan Scherer, et al. NADiA: Neural Network Driven Virtual Human Conversation Agents, 2018, IVA.
[22] Johannes Wagner, et al. Deep Learning in Paralinguistic Recognition Tasks: Are Hand-crafted Features Still Relevant?, 2018, INTERSPEECH.
[23] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.
[24] Stacy Marsella, et al. How to Train Your Avatar: A Data Driven Approach to Gesture Generation, 2011, IVA.
[25] Koen V. Hindriks, et al. Do You Get It? User-Evaluated Explainable BDI Agents, 2010, MATES.
[26] Michael Siebers, et al. Please delete that! Why should I?, 2018, KI - Künstliche Intelligenz.
[27] M. Pickering, et al. Why is conversation so easy?, 2004, Trends in Cognitive Sciences.
[28] David R. Traum, et al. What would you Ask a conversational Agent? Observations of Human-Agent Dialogues in a Museum Setting, 2008, LREC.
[29] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[30] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[31] Elisabeth André, et al. "Do you trust me?": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design, 2019, IVA.
[32] Trevor Darrell, et al. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[33] Michael A. Rupp, et al. Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management, 2016, Hum. Factors.
[34] E. Thorndike. A constant error in psychological ratings, 1920.
[35] Ute Schmid, et al. Inductive Programming as Approach to Comprehensible Machine Learning, 2018, DKB/KIK@KI.
[36] Ute Schmid, et al. Deep-learned faces of pain and emotions: Elucidating the differences of facial expressions with the help of explainable AI methods, 2019, tm - Technisches Messen.
[37] Ryen W. White. Opportunities and challenges in search interaction, 2018, Commun. ACM.
[38] Elisabeth André, et al. The Persona Effect: How Substantial Is It?, 1998, BCS HCI.
[39] Pamela J. Hinds, et al. Autonomy and Common Ground in Human-Robot Interaction: A Field Study, 2007, IEEE Intelligent Systems.
[40] Michael W. Boyce, et al. Situation Awareness-Based Agent Transparency, 2014.
[41] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[42] Maartje M. A. de Graaf, et al. How People Explain Action (and Autonomous Intelligent Systems Should Too), 2017, AAAI Fall Symposia.
[43] Albert Gatt, et al. Learning when to point: A data-driven approach, 2014, COLING.
[44] Tim Miller, et al. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences, 2017, ArXiv.
[45] Tomokazu Yoshida, et al. A study of an automatic field map creation method using Efficient Graph-Based Image Segmentation, 2014.