Gagan Bansal | Tongshuang Wu | Joyce Zhou | Raymond Fok | Marco Tulio Ribeiro | Besmira Nushi | Ece Kamar | Daniel S. Weld
[1] Jure Leskovec,et al. Learning Attitudes and Attributes from Multi-aspect Reviews , 2012, 2012 IEEE 12th International Conference on Data Mining.
[2] Johannes Gehrke,et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission , 2015, KDD.
[3] Gregory D. Abowd,et al. Towards a Better Understanding of Context and Context-Awareness , 1999, HUC.
[4] Noah A. Smith,et al. Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories , 2018, IUI.
[5] Dong Nguyen,et al. Comparing Automatic and Human Evaluation of Local Explanations for Text Classification , 2018, NAACL.
[6] Mohit Bansal,et al. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? , 2020, ACL.
[7] Mark Braverman,et al. Data-Driven Decisions for Reducing Readmissions for Heart Failure: General Methodology and Case Study , 2014, PloS one.
[8] Xiaoli Z. Fern,et al. Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference , 2018, EMNLP.
[9] Han Liu,et al. "Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans , 2020, CHI.
[10] Yugo Hayashi,et al. Can AI become Reliable Source to Support Human Decision Making in a Court Scene? , 2017, CSCW Companion.
[11] Jure Leskovec,et al. Interpretable & Explorable Approximations of Black Box Models , 2017, ArXiv.
[12] Daniel S. Weld,et al. The challenge of crafting intelligible intelligence , 2018, Commun. ACM.
[13] Milind Tambe,et al. Learning to Prescribe Interventions for Tuberculosis Patients Using Digital Adherence Data , 2019, KDD.
[14] Kevin Gimpel,et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks , 2016, ICLR.
[15] Tim Miller,et al. Explanation in Artificial Intelligence: Insights from the Social Sciences , 2017, Artif. Intell..
[16] E. Rowland. Theory of Games and Economic Behavior , 1946, Nature.
[17] Jürgen Ziegler,et al. Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems , 2019, CHI.
[18] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[19] Amy Bruckman,et al. Does Transparency in Moderation Really Matter? , 2019, Proc. ACM Hum. Comput. Interact..
[20] Naveena Karusala,et al. Street-Level Realities of Data Practices in Homeless Services Provision , 2019, Proc. ACM Hum. Comput. Interact..
[21] Emily Chen,et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation , 2018, ArXiv.
[22] Zahra Ashktorab,et al. Mental Models of AI Agents in a Cooperative Game Setting , 2020, CHI.
[23] Jun Zhao,et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions , 2018, CHI.
[24] F. Strack,et al. Playing Dice With Criminal Sentences: The Influence of Irrelevant Anchors on Experts’ Judicial Decision Making , 2006, Personality & social psychology bulletin.
[25] Yunfeng Zhang,et al. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making , 2020, FAT*.
[26] Jean Scholtz,et al. How do visual explanations foster end users' appropriate trust in machine learning? , 2020, IUI.
[27] Philip J. Guo,et al. OverCode: visualizing variation in student solutions to programming problems at scale , 2014, ACM Trans. Comput. Hum. Interact..
[28] Limor Nadav-Greenberg,et al. Uncertainty Forecasts Improve Decision Making Among Nonexperts , 2009.
[29] Milind Tambe,et al. Stay Ahead of Poachers: Illegal Wildlife Poaching Prediction and Patrol Planning Under Uncertainty with Field Test Evaluations (Short Version) , 2019, 2020 IEEE 36th International Conference on Data Engineering (ICDE).
[30] Randall D. Beer,et al. A Dynamical Systems Perspective on Agent-Environment Interaction , 1995, Artif. Intell..
[31] Eric Horvitz,et al. Complementary computing: policies for transferring callers from dialog systems to human receptionists , 2006, User Modeling and User-Adapted Interaction.
[32] Eric Horvitz,et al. Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration , 2016, AAAI.
[33] Shi Feng,et al. What can AI do for me?: evaluating machine learning interpretations in cooperative play , 2019, IUI.
[34] Devi Parikh,et al. Do explanations make VQA models more predictable to a human? , 2018, EMNLP.
[35] Ankur Taly,et al. Explainable machine learning in deployment , 2020, FAT*.
[36] Raymond J. Mooney,et al. Explaining Recommendations: Satisfaction vs. Promotion , 2005.
[37] Jorge Gonçalves,et al. Crowdsourcing Perceptions of Fair Predictors for Machine Learning , 2019, Proc. ACM Hum. Comput. Interact..
[38] Byron C. Wallace,et al. ERASER: A Benchmark to Evaluate Rationalized NLP Models , 2020, ACL.
[39] Sameer Singh,et al. AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models , 2019, EMNLP.
[40] Eric Horvitz,et al. Principles of mixed-initiative user interfaces , 1999, CHI '99.
[41] Bowen Zhou,et al. A Structured Self-attentive Sentence Embedding , 2017, ICLR.
[42] Felix Bießmann,et al. Quantifying Interpretability and Trust in Machine Learning Systems , 2019, ArXiv.
[43] Derek J. Koehler,et al. Explanation, imagination, and confidence in judgment. , 1991, Psychological bulletin.
[44] Harmanpreet Kaur,et al. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning , 2020, CHI.
[45] Fang Chen,et al. Do I trust my machine teammate?: an investigation from perception to decision , 2019, IUI.
[46] David Sontag,et al. Consistent Estimators for Learning to Defer to an Expert , 2020, ICML.
[47] Toniann Pitassi,et al. Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer , 2017, NeurIPS.
[48] Li Zhao,et al. Attention-based LSTM for Aspect-level Sentiment Classification , 2016, EMNLP.
[49] Thomas G. Dietterich,et al. Interacting meaningfully with machine learning systems: Three experiments , 2009, Int. J. Hum. Comput. Stud..
[50] Dympna O'Sullivan,et al. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems , 2015, 2015 International Conference on Healthcare Informatics.
[51] Long Tran-Thanh,et al. Utilizing Housing Resources for Homeless Youth Through the Lens of Multiple Multi-Dimensional Knapsacks , 2018, AIES.
[52] Krzysztof Z. Gajos,et al. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems , 2020, IUI.
[53] John D. Lee,et al. Trust in Automation: Designing for Appropriate Reliance , 2004.
[54] Mykola Pechenizkiy,et al. A Human-Grounded Evaluation of SHAP for Alert Processing , 2019, ArXiv.
[55] Kilian Q. Weinberger,et al. On Calibration of Modern Neural Networks , 2017, ICML.
[56] Daniel S. Weld,et al. Optimizing AI for Teamwork , 2020, ArXiv.
[57] Pat Croskerry,et al. Clinical cognition and diagnostic error: applications of a dual process model of reasoning , 2009, Advances in health sciences education : theory and practice.
[58] Percy Liang,et al. Understanding Black-box Predictions via Influence Functions , 2017, ICML.
[59] Kori Inkpen Quinn,et al. Investigating Human + Machine Complementarity for Recidivism Predictions , 2018, ArXiv.
[60] Richard B. Berlin,et al. A Slow Algorithm Improves Users' Assessments of the Algorithm's Accuracy , 2019, Proc. ACM Hum. Comput. Interact..
[61] BEN GREEN,et al. The Principles and Limits of Algorithm-in-the-Loop Decision Making , 2019, Proc. ACM Hum. Comput. Interact..
[62] Julian J. McAuley,et al. Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering , 2016, WWW.
[63] Pramod K. Varshney,et al. Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory , 2018, ArXiv.
[64] T. Levine. Truth-Default Theory (TDT) , 2014.
[65] Scott M. Lundberg,et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery , 2018, Nature Biomedical Engineering.
[66] Jenna Wiens,et al. Patient Risk Stratification with Time-Varying Parameters: A Multitask Learning Approach , 2016, J. Mach. Learn. Res..
[67] Qian Yang,et al. Designing Theory-Driven User-Centric Explainable AI , 2019, CHI.
[68] Regina Barzilay,et al. Rationalizing Neural Predictions , 2016, EMNLP.
[69] Anind K. Dey,et al. Why and why not explanations improve the intelligibility of context-aware intelligent systems , 2009, CHI.
[70] Sungsoo Ray Hong,et al. Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs , 2020, Proc. ACM Hum. Comput. Interact..
[71] Jiashi Feng,et al. ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning , 2020, ICLR.
[72] Daniel G. Goldstein,et al. Manipulating and Measuring Model Interpretability , 2018, CHI.
[73] Dimitra Gkatzia,et al. Natural Language Generation enhances human decision-making with uncertain information , 2016, ACL.
[74] H. D. Brunk,et al. The Isotonic Regression Problem and its Dual , 1972.
[75] Eric Horvitz,et al. Learning to Complement Humans , 2020, IJCAI.
[76] Pietro Perona,et al. Teaching Categories to Human Learners with Visual Explanations , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[77] Aleksandrs Slivkins,et al. Incentivizing high quality crowdwork , 2015, SECO.
[78] Nathan Srebro,et al. Equality of Opportunity in Supervised Learning , 2016, NIPS.
[79] Vivian Lai,et al. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection , 2018, FAT.
[80] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[81] Angli Liu,et al. Effective Crowd Annotation for Relation Extraction , 2016, NAACL.
[82] Inioluwa Deborah Raji,et al. Model Cards for Model Reporting , 2018, FAT.
[83] Aaron Halfaker,et al. Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems , 2020, CHI.
[84] Eric Horvitz,et al. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance , 2019, HCOMP.
[85] S. Joslyn,et al. Decisions With Uncertainty: The Glass Half Full , 2013.
[86] Lauren Wilcox,et al. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making , 2019, Proc. ACM Hum. Comput. Interact..
[87] Eric Horvitz,et al. Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff , 2019, AAAI.
[88] Omer Levy,et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach , 2019, ArXiv.
[89] Jason Weston,et al. Finding Generalizable Evidence by Learning to Convince Q&A Models , 2019, EMNLP.
[90] Sean A. Munson,et al. Uncertainty Displays Using Quantile Dotplots or CDFs Improve Transit Decision-Making , 2018, CHI.
[91] Zachary C. Lipton,et al. The mythos of model interpretability , 2018, Commun. ACM.
[92] T. Lombrozo,et al. Simplicity and probability in causal explanation , 2007, Cognitive Psychology.
[93] Les Macleod,et al. Avoiding "groupthink": a manager's challenge. , 2011, Nursing management.