Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI
Mark O. Riedl | A. Riener | Q. Liao | Philipp Wintersberger | E. A. Watkins | Upol Ehsan | Carina Manger | Hal Daumé III
[1] Mohan S. Kankanhalli,et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda , 2018, CHI.
[2] Mark O. Riedl,et al. Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach , 2020, HCI.
[3] Philipp Wintersberger,et al. Operationalizing Human-Centered Perspectives in Explainable AI , 2021, CHI Extended Abstracts.
[4] R. Bellamy,et al. Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers , 2021 .
[5] Mark R. Lehto,et al. Foundations for an Empirically Determined Scale of Trust in Automated Systems , 2000 .
[6] Rachel K. E. Bellamy,et al. Explaining models: an empirical study of how explanations impact fairness judgment , 2019, IUI.
[7] Yunfeng Zhang,et al. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making , 2020, FAT*.
[8] Cynthia Rudin,et al. The age of secrecy and unfairness in recidivism prediction , 2018, Harvard Data Science Review.
[9] Daniel S. Weld,et al. No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML , 2020, CHI.
[10] M. C. Elish,et al. Repairing Innovation: A Study of Integrating AI in Clinical Care , 2020, Data & Society.
[11] Eric D. Ragan,et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems , 2018, ACM Trans. Interact. Intell. Syst..
[12] Q. Liao,et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences , 2020, CHI.
[13] R. Westrum. The Social Construction of Technological Systems , 1989 .
[14] Johanna D. Moore,et al. Explanation in Expert Systems: A Survey , 1988 .
[15] Holger Hermanns,et al. What Do We Want From Explainable Artificial Intelligence (XAI)? - A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research , 2021, Artif. Intell..
[16] Alejandro Barredo Arrieta,et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI , 2019, Inf. Fusion.
[17] D. MacKenzie. Material Signals: A Historical Sociology of High-Frequency Trading , 2018, American Journal of Sociology.
[18] Mark O. Riedl,et al. The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations , 2021, ArXiv.
[19] Gary Klein,et al. Metrics for Explainable AI: Challenges and Prospects , 2018, ArXiv.
[20] Christian Biemann,et al. What do we need to build explainable AI systems for the medical domain? , 2017, ArXiv.
[21] Mark O. Riedl,et al. Expanding Explainability: Towards Social Transparency in AI systems , 2021, CHI.
[22] Bruce N. Walker,et al. Situational Trust Scale for Automated Driving (STS-AD): Development and Initial Validation , 2020, AutomotiveUI.
[23] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.
[24] Tim Miller,et al. Explanation in Artificial Intelligence: Insights from the Social Sciences , 2017, Artif. Intell..
[25] Mark O. Riedl,et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions , 2019, IUI.
[26] M. Kaminski. The right to explanation, explained , 2018, Research Handbook on Information Law and Governance.
[27] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv..