Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience