A Human-Centered Agenda for Intelligible Machine Learning
[1] Paul N. Bennett, et al. Will You Accept an Imperfect AI?: Exploring Designs for Adjusting End-user Expectations of AI Systems, 2019, CHI.
[2] Cynthia Rudin, et al. Optimized Scoring Systems: Toward Trust in Machine Learning for Healthcare and Criminal Justice, 2018, Interfaces.
[3] Chris Russell, et al. Efficient Search for Diverse Coherent Explanations, 2019, FAT.
[4] Steven M. Drucker, et al. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models, 2019, CHI.
[5] Inioluwa Deborah Raji, et al. Model Cards for Model Reporting, 2018, FAT.
[6] Johannes Gehrke, et al. Intelligible models for classification and regression, 2012, KDD.
[7] Jure Leskovec, et al. Faithful and Customizable Explanations of Black Box Models, 2019, AIES.
[8] Rachel K. E. Bellamy, et al. Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment, 2019, IUI.
[9] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[10] Eunsol Choi, et al. QuAC: Question Answering in Context, 2018, EMNLP.
[11] Todd Kulesza, et al. Tell me more?: The effects of mental model soundness on personalizing an intelligent agent, 2012, CHI.
[12] Karrie Karahalios, et al. The Illusion of Control: Placebo Effects of Control Settings, 2018, CHI.
[13] Nazli Ikizler-Cinbis, et al. RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes, 2018, EMNLP.
[14] Thomas G. Dietterich, et al. Interacting meaningfully with machine learning systems: Three experiments, 2009, Int. J. Hum. Comput. Stud.
[15] Tommi S. Jaakkola, et al. On the Robustness of Interpretability Methods, 2018, ArXiv.
[16] Brandin Hanson Knowles. Intelligibility in the Face of Uncertainty, 2017.
[17] David Maxwell Chickering, et al. ModelTracker: Redesigning Performance Analysis Tools for Machine Learning, 2015, CHI.
[18] Anind K. Dey, et al. Design of an intelligible mobile context-aware application, 2011, Mobile HCI.
[19] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[20] Weng-Keen Wong, et al. Too much, too little, or just right? Ways explanations impact end users' mental models, 2013, IEEE Symposium on Visual Languages and Human Centric Computing.
[21] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[22] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[23] Regina Barzilay, et al. Rationalizing Neural Predictions, 2016, EMNLP.
[24] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[25] Yang Liu, et al. Actionable Recourse in Linear Classification, 2018, FAT.
[26] D. Goldstein, et al. Simple Rules for Complex Decisions, 2017, ArXiv:1702.04690.
[27] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[28] Samuel J. Gershman, et al. Human Evaluation of Models Built for Interpretability, 2019, HCOMP.
[29] Anind K. Dey, et al. Why and why not explanations improve the intelligibility of context-aware intelligent systems, 2009, CHI.
[30] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[31] Rocky Ross, et al. Mental models, 2004, SIGA.
[32] Samuel J. Gershman, et al. Human-in-the-Loop Interpretability Prior, 2018, NeurIPS.
[33] W. Keith Edwards, et al. Intelligibility and Accountability: Human Considerations in Context-Aware Systems, 2001, Hum. Comput. Interact.
[34] Mohan S. Kankanhalli, et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, 2018, CHI.
[35] Judith Masthoff, et al. Designing and Evaluating Explanations for Recommender Systems, 2011, Recommender Systems Handbook.
[36] Timnit Gebru, et al. Datasheets for datasets, 2018, Commun. ACM.
[37] Rich Caruana, et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, 2017, AIES.
[38] Andrea Bunt, et al. Are explanations always important?: A study of deployed, low-cost intelligent interactive systems, 2012, IUI.
[39] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[40] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[41] Ming Yin, et al. Understanding the Effect of Accuracy on Trust in Machine Learning Models, 2019, CHI.
[42] Thomas S. Woodson. Weapons of math destruction, 2018, Journal of Responsible Innovation.
[43] Harmanpreet Kaur, et al. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning, 2020, CHI.
[44] P. Johnson-Laird, et al. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, 1985.
[45] Johannes Gehrke, et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[46] Daniel S. Weld, et al. The challenge of crafting intelligible intelligence, 2018, Commun. ACM.
[47] Kush R. Varshney, et al. Increasing Trust in AI Services through Supplier's Declarations of Conformity, 2018, IBM J. Res. Dev.
[48] Donald A. Norman, et al. Some observations on mental models, 1987.
[49] David McSherry, et al. Explanation in Recommender Systems, 2005, Artificial Intelligence Review.
[50] Dympna O'Sullivan, et al. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems, 2015, International Conference on Healthcare Informatics.
[51] Bongshin Lee, et al. Squares: Supporting Interactive Performance Analysis for Multiclass Classifiers, 2017, IEEE Transactions on Visualization and Computer Graphics.
[52] Johannes Gehrke, et al. Accurate intelligible models with pairwise interactions, 2013, KDD.
[53] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv:1702.08608.
[54] Anind K. Dey, et al. Investigating intelligibility for uncertain context-aware applications, 2011, UbiComp.
[55] Hanna M. Wallach, et al. Weight of Evidence as a Basis for Human-Oriented Explanations, 2019, ArXiv.
[56] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[57] Dean C. Barnlund. A Transactional Model of Communication, 1970.
[58] Emily M. Bender, et al. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science, 2018, TACL.
[59] R. Dawes. Judgment under uncertainty: The robust beauty of improper linear models in decision making, 1979.
[60] Samir Elhedhli, et al. The Effectiveness of Simple Decision Heuristics: Forecasting Commercial Success for Early-Stage Ventures, 2006, Manag. Sci.
[61] Cynthia Rudin, et al. Supersparse linear integer models for optimized medical scoring systems, 2015, Machine Learning.
[62] Jon M. Kleinberg, et al. Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability, 2018, EC.
[63] Michael Veale, et al. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making, 2018, CHI.
[64] Jun Zhao, et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions, 2018, CHI.
[65] Tommi S. Jaakkola, et al. A causal framework for explaining the predictions of black-box sequence-to-sequence models, 2017, EMNLP.