Mark O. Riedl | Q. Vera Liao | Upol Ehsan | Samir Passi | Larry Chan | Michael J. Muller | I-Hsiang Lee
[1] Siddhartha S. Srinivasa et al. Gracefully mitigating breakdowns in robotic services, 2010, 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[2] Haiyi Zhu et al. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders, 2019, CHI.
[3] P. McCullagh. Regression Models for Ordinal Data, 1980.
[4] Francisco Herrera et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2020, Inf. Fusion.
[5] Fred D. Davis et al. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models, 1989.
[6] Xiaogang Wang et al. Residual Attention Network for Image Classification, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Antti Salovaara et al. Inventing new uses for tools: A cognitive foundation for studies on appropriation, 2008.
[8] Carrie J. Cai et al. The effects of example-based explanations in a machine learning interface, 2019, IUI.
[9] Jure Leskovec et al. Human Decisions and Machine Predictions, 2017, The Quarterly Journal of Economics.
[10] Andy J. King et al. Utilization of Internet Technology by Low-Income Adults, 2010, Journal of Aging and Health.
[11] Anind K. Dey et al. Why and why not explanations improve the intelligibility of context-aware intelligent systems, 2009, CHI.
[12] D. MacKenzie. Material Signals: A Historical Sociology of High-Frequency Trading, 2018, American Journal of Sociology.
[13] John Bowers et al. The logic of annotated portfolios: communicating the value of 'research through design', 2012, DIS '12.
[14] L. Tickle-Degnen et al. The Nature of Rapport and Its Nonverbal Correlates, 1990.
[15] Abhishek Das et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[16] Mark O. Riedl et al. Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach, 2020, HCI.
[17] Weng-Keen Wong et al. Too much, too little, or just right? Ways explanations impact end users' mental models, 2013, 2013 IEEE Symposium on Visual Languages and Human Centric Computing.
[18] Robert Chen et al. Machine Learning Model Interpretability for Precision Medicine, 2016, arXiv:1610.09045.
[19] Scott Lundberg et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[20] P. C. Wason et al. Dual processes in reasoning?, 1975, Cognition.
[21] Franco Turini et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[22] Carlos Guestrin et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[23] Brian Y. Lim et al. COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations, 2020, CHI.
[24] Milagros Miceli et al. Between Subjectivity and Imposition, 2020, Proc. ACM Hum. Comput. Interact.
[25] Stefan Kopp et al. Effects of a Social Robot's Self-Explanations on How Humans Understand and Evaluate Its Behavior, 2020, HRI.
[26] Tara S. Behrend et al. The viability of crowdsourcing for survey research, 2011, Behavior Research Methods.
[27] Michael Chromik et al. I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI, 2021, IUI.
[28] Ohbyung Kwon et al. Technology acceptance theories and factors influencing artificial intelligence-based intelligent products, 2020, Telematics Informatics.
[29] W. Lewis Johnson et al. Agents that Learn to Explain Themselves, 1994, AAAI.
[30] Mark O. Riedl et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions, 2019, IUI.
[31] Daniel Buschek et al. How to Support Users in Understanding Intelligent Systems? Structuring the Discussion, 2020, IUI.
[32] P. Lipton. What Good is an Explanation, 2001.
[33] Andrea A. diSessa et al. Changing Minds: Computers, Learning, and Literacy, 2000.
[34] Been Kim et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[35] Christian Biemann et al. What do we need to build explainable AI systems for the medical domain?, 2017, arXiv.
[36] Rebecca Gray et al. Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed, 2015, CHI.
[37] Anca D. Dragan et al. Expressing Robot Incapability, 2018, 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[38] Lalana Kagal et al. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning, 2018.
[39] Phoebe Sengers et al. Reflective design, 2005, Critical Computing.
[40] N. McGlynn. Thinking fast and slow, 2014, Australian Veterinary Journal.
[41] Pat Croskerry. Cognitive forcing strategies in clinical decisionmaking, 2003, Annals of Emergency Medicine.
[42] Richard A. Berk et al. Overview of: "Statistical Procedures for Forecasting Criminal Behavior: A Comparative Assessment", 2013.
[43] Amina Adadi et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[44] Jonathan Robinson et al. TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences, 2016, Behavior Research Methods.
[45] R. Bellamy et al. Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers, 2021.
[46] Brian Magerko et al. What is AI Literacy? Competencies and Design Considerations, 2020, CHI.
[47] Rachel K. E. Bellamy et al. Explaining models: an empirical study of how explanations impact fairness judgment, 2019.
[48] T. Lombrozo. Explanatory Preferences Shape Learning and Inference, 2016, Trends in Cognitive Sciences.
[49] William R. Swartout et al. XPLAIN: A System for Creating and Explaining Expert Consulting Programs, 1983, Artif. Intell.
[50] Mark O. Riedl et al. Expanding Explainability: Towards Social Transparency in AI systems, 2021, CHI.
[51] Edward H. Shortliffe et al. Computer-based medical consultations: MYCIN, 1976.
[52] Takayuki Kanda et al. Modeling and Controlling Friendliness for An Interactive Museum Robot, 2014, Robotics: Science and Systems.
[53] Sheng Wu et al. The integration of value-based adoption and expectation-confirmation models: An example of IPTV continuance intention, 2012, Decis. Support Syst.
[54] Harmanpreet Kaur et al. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning, 2020, CHI.
[55] Brian Scassellati et al. No fair!! An interaction with a cheating robot, 2010, 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[56] Sang M. Lee et al. An exploratory cognitive DSS for strategic decision making, 2003, Decis. Support Syst.
[57] Manfred Tscheligi et al. Collaborative Appropriation: How Couples, Teams, Groups and Communities Adapt and Adopt Technologies, 2016, CSCW Companion.
[58] Frank Bentley et al. Comparing the Reliability of Amazon Mechanical Turk and Survey Monkey to Traditional Market Research Surveys, 2017, CHI Extended Abstracts.
[59] Clifford Nass et al. Machines, social attributions, and ethopoeia: performance assessments of computers subsequent to "self-" or "other-" evaluations, 1994, Int. J. Hum. Comput. Stud.
[60] N. Sadat Shami et al. What Can You Do? Studying Social-Agent Orientation and Agent Proactive Interactions with an Agent for Employees, 2016, Conference on Designing Interactive Systems.
[61] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[62] David J. Hauser et al. Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants, 2015, Behavior Research Methods.
[63] Jichen Zhu et al. The Impact of User Characteristics and Preferences on Performance with an Unfamiliar Voice User Interface, 2019, CHI.
[64] K. Holyoak et al. The Oxford Handbook of Thinking and Reasoning, 2012.
[65] EunJeong Cheon et al. Configuring the User: "Robots have Needs Too", 2017, CSCW.
[66] Yasaman Khazaeni et al. All Work and No Play? Conversations with a Question-and-Answer Chatbot in the Wild, 2018, CHI.
[67] Sidney S. Fels et al. Adoption and Appropriation: A Design Process from HCI Research at a Brazilian Neurological Hospital, 2013, INTERACT.
[68] Jiebo Luo et al. Image Captioning with Semantic Attention, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[69] Holly A. Yanco et al. Impact of robot failures and feedback on real-time trust, 2013, 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[70] Tania Lombrozo et al. Explanation and inference: mechanistic and functional explanations guide property generalization, 2014, Front. Hum. Neurosci.
[71] Haiyi Zhu et al. Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences, 2020, CHI.
[72] Alan Agresti. Categorical Data Analysis, 2003.
[73] Sonia Chernova et al. Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery, 2021, 2021 16th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[74] Michael van Lent et al. An Explainable Artificial Intelligence System for Small-unit Tactical Behavior, 2004, AAAI.
[75] Eric P. S. Baumer et al. Who is the "Human" in Human-Centered Machine Learning, 2019.
[76] A. Strauss et al. The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine de Gruyter, 1968.
[77] Avi Rosenfeld et al. Explainability in human–agent systems, 2019, Autonomous Agents and Multi-Agent Systems.
[78] Aniket Kittur et al. Crowdsourcing user studies with Mechanical Turk, 2008, CHI.
[79] Mark O. Riedl et al. Increasing Replayability with Deliberative and Reactive Planning, 2005, AIIDE.
[80] John T. Cacioppo et al. The Elaboration Likelihood Model of Persuasion, 1986, Advances in Experimental Social Psychology.
[81] J. Jaccard. Interaction effects in logistic regression, 2001.
[82] Vibhav Gogate et al. Anchoring Bias Affects Mental Model Formation and User Reliance in Explainable AI Systems, 2021, IUI.
[83] Jun Zhao et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions, 2018, CHI.
[84] Yoshua Bengio et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2015, ICML.
[85] Steven M. Drucker et al. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models, 2019, CHI.
[86] Rich Caruana et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, 2017, AIES.
[87] Kimmo Eriksson. The nonsense math effect, 2012, Judgment and Decision Making.
[88] Yan Liu et al. Interpretable Deep Models for ICU Outcome Prediction, 2016, AMIA.
[89] Steven J. Jackson et al. Data Vision: Learning to See Through Algorithmic Abstraction, 2017, CSCW.
[90] Krzysztof Z. Gajos et al. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems, 2020, IUI.
[91] Johanna D. Moore et al. Explanation in Expert Systems: A Survey, 1988.
[92] K. Weick. From Sensemaking in Organizations, 2021, The New Economic Sociology.
[93] Phoebe Sengers et al. Making data science systems work, 2020, Big Data Soc.
[94] Wiebe E. Bijker et al. Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change, 1995.
[95] Sanjeeb Dash et al. Boolean Decision Rules via Column Generation, 2018, NeurIPS.
[96] Sonia Chernova et al. Leveraging rationales to improve human task performance, 2020, IUI.
[97] C. Nass et al. Can computers be teammates?, 1996.
[98] D. Bawden. Origins and Concepts of Digital Literacy, 2008.
[99] Manfred Tscheligi et al. Potentials of the "Unexpected": Technology Appropriation Practices and Communication Needs, 2014, GROUP.
[100] Timothy W. Bickmore et al. Towards caring machines, 2004, CHI EA '04.
[101] C. Nass et al. Machines and Mindlessness, 2000.
[102] Gordon B. Davis et al. User Acceptance of Information Technology: Toward a Unified View, 2003, MIS Q.
[103] Jodi Forlizzi et al. Receptionist or information kiosk: how do people talk with a robot?, 2010, CSCW '10.
[104] Sanjeeb Dash et al. Generalized Linear Rule Models, 2019, ICML.
[105] Eric D. Ragan et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems, 2018, ACM Trans. Interact. Intell. Syst.
[106] H. Chad Lane et al. Building Explainable Artificial Intelligence Systems, 2006, AAAI.
[107] Michael Veale et al. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data, 2017, Big Data Soc.
[108] Nicu Sebe et al. Guest Editors' Introduction: Human-Centered Computing--Toward a Human Revolution, 2007, Computer.
[109] Mary L. Gray et al. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, 2019.
[110] Gary O'Reilly et al. Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review, 2016, BMJ Quality & Safety.
[111] Solon Barocas et al. Problem Formulation and Fairness, 2019, FAT.
[112] Q. Liao et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences, 2020, CHI.
[113] Daniel A. Wilkenfeld et al. Mechanistic versus Functional Understanding, 2019, Varieties of Understanding.
[114] Kurt Hornik et al. Implementing a Class of Permutation Tests: The coin Package, 2008.
[115] Colin Lankshear et al. Introduction: digital literacies: concepts, policies and practices, 2008.
[116] T. Koda et al. Agents with faces: the effect of personification, 1996, Proceedings 5th IEEE International Workshop on Robot and Human Communication (RO-MAN'96 TSUKUBA).
[117] K. Karahalios et al. "I always assumed that I wasn't really that close to [her]": Reasoning about Invisible Algorithms in News Feeds, 2015, CHI.
[118] Tim Miller et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[119] Peter Dayan et al. Q-learning, 1992, Machine Learning.
[120] Andrew D. Selbst et al. Big Data's Disparate Impact, 2016.
[121] Joseph Goodman et al. Crowdsourcing Consumer Research, 2017.
[122] Mark O. Riedl et al. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations, 2017, AIES.
[123] Alan J. Dix et al. Designing for appropriation, 2007, BCS HCI.
[124] Iason Gabriel et al. Artificial Intelligence, Values, and Alignment, 2020, Minds and Machines.
[125] Daniel S. Weld et al. No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML, 2020, CHI.
[126] M. Six Silberman et al. From critical design to critical infrastructure: lessons from turkopticon, 2014, INTR.
[127] Jon Sprouse. A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory, 2010, Behavior Research Methods.
[128] Heinrich Hußmann et al. The Impact of Placebic Explanations on Trust in Intelligent Systems, 2019, CHI Extended Abstracts.
[129] T. Lombrozo. The Instrumental Value of Explanations, 2011.
[130] D. Gromala et al. Bridging AI Developers and End Users: an End-User-Centred Explainable AI Taxonomy and Visual Vocabularies, 2019.
[131] Alun D. Preece et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, 2018, arXiv.
[132] Rob Kling et al. Human centered systems in the perspective of organizational and social informatics, 1998, CSOC.
[133] Dirk Heylen et al. First Impressions: Users' Judgments of Virtual Agents' Personality and Interpersonal Attitude in First Encounters, 2012, IVA.
[134] Cynthia Rudin et al. The age of secrecy and unfairness in recidivism prediction, 2018, Harvard Data Science Review 2.1.
[135] Michael J. Muller et al. Human-Centered Study of Data Science Work Practices, 2019, CHI Extended Abstracts.
[136] Christopher D. Manning et al. Effective Approaches to Attention-based Neural Machine Translation, 2015, EMNLP.
[137] Enrico Costanza et al. Evaluating saliency map explanations for convolutional neural networks: a user study, 2020, IUI.
[138] Parisa Rashidi et al. Artificial Intelligence and Surgical Decision-Making, 2019, JAMA Surgery.
[139] Antti Salovaara et al. Acceptance or Appropriation? A Design-Oriented Critique of Technology Acceptance Models, 2009.
[140] Colin M. Gray et al. The Dark (Patterns) Side of UX Design, 2018, CHI.
[141] T. Lombrozo et al. Inference to the Best Explanation (IBE) Versus Explaining for the Best Inference (EBI), 2015, Science Education.
[142] Yunyao Li et al. Who needs to know what, when? Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle, 2021, Conference on Designing Interactive Systems.
[143] Hod Lipson et al. Understanding Neural Networks Through Deep Visualization, 2015, arXiv.
[144] Manfred Tscheligi et al. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot, 2017, Front. Robot. AI.
[145] C. Goodwin. Professional Vision, 1994.