Han Liu | Chenhao Tan | Vivian Lai
[1] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[2] Sinan Aral, et al. The spread of true and false news online, 2018, Science.
[3] Cynthia Rudin, et al. The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification, 2014, NIPS.
[4] Q. Ye, et al. The impact of e-word-of-mouth on the online popularity of restaurants: a comparison of consumer reviews and editor reviews, 2010.
[5] Claire Cardie, et al. Towards a General Rule for Identifying Deceptive Opinion Spam, 2014, ACL.
[6] Derek Greene, et al. Distortion as a validation criterion in the identification of suspicious reviews, 2010, SOMA '10.
[7] Vivian Lai, et al. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection, 2018, FAT.
[8] Steven Myers, et al. Prevalence and mitigation of forum spamming, 2011, IEEE INFOCOM.
[9] Jun Zhao, et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions, 2018, CHI.
[10] Chenhao Tan, et al. Many Faces of Feature Importance: Comparing Built-in and Post-hoc Feature Importance in Text Classification, 2019, EMNLP/IJCNLP.
[11] Mohamed Abouelenien, et al. Verbal and Nonverbal Clues for Real-life Deception Detection, 2015, EMNLP.
[12] K. Pauwels, et al. Effects of Word-of-Mouth versus Traditional Marketing: Findings from an Internet Social Networking Site, 2009.
[13] Franco Turini, et al. Local Rule-Based Explanations of Black Box Decision Systems, 2018, arXiv.
[14] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[15] Chris Russell, et al. Efficient Search for Diverse Coherent Explanations, 2019, FAT.
[16] Jürgen Ziegler, et al. Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems, 2019, CHI.
[17] J. Kleinberg, et al. Prediction Policy Problems, 2015, The American Economic Review.
[18] Yejin Choi, et al. Syntactic Stylometry for Deception Detection, 2012, ACL.
[19] Jure Leskovec, et al. Human Decisions and Machine Predictions, 2017, The Quarterly Journal of Economics.
[20] Claire Cardie, et al. Finding Deceptive Opinion Spam by Any Stretch of the Imagination, 2011, ACL.
[21] Ben Green, et al. Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments, 2019, FAT.
[22] Wei Chen, et al. The influence of user-generated content on traveler behavior: An empirical investigation on the effects of e-word-of-mouth to hotel online bookings, 2011, Comput. Hum. Behav.
[23] Janni Nielsen, et al. Getting access to what goes on in people's heads?: reflections on the think-aloud technique, 2002, NordiCHI '02.
[24] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[26] Miriam J. Metzger, et al. The science of fake news, 2018, Science.
[27] Eric Horvitz, et al. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance, 2019, HCOMP.
[28] Avner Caspi, et al. Online Deception: Prevalence, Motivation, and Emotion, 2006, Cyberpsychology Behav. Soc. Netw.
[29] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[30] A. Vrij. Detecting Lies and Deceit: The Psychology of Lying and the Implications for Professional Practice, 2000.
[31] John D. Lee, et al. Trust in Automation: Designing for Appropriate Reliance, 2004.
[32] D. Lazer, et al. Fake news on Twitter during the 2016 U.S. presidential election, 2019, Science.
[33] Amit Sharma, et al. Explaining machine learning classifiers through diverse counterfactual explanations, 2020, FAT*.
[34] Claire Cardie, et al. Negative Deceptive Opinion Spam, 2013, NAACL.
[35] Regina Barzilay, et al. Rationalizing Neural Predictions, 2016, EMNLP.
[36] Joachim Diederich, et al. Survey and critique of techniques for extracting rules from trained artificial neural networks, 1995, Knowl. Based Syst.
[37] David R. Karger, et al. A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles, 2018, WWW.
[38] Bernhard Schölkopf, et al. Enhancing human learning via spaced repetition optimization, 2019, Proceedings of the National Academy of Sciences.
[39] M. Gentzkow, et al. Social Media and Fake News in the 2016 Election, 2017.
[40] Bing Liu, et al. Opinion spam and analysis, 2008, WSDM '08.
[41] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[42] Mohamed Abouelenien, et al. Deception detection using a multimodal approach, 2014, ICMI.
[43] B. DePaulo, et al. Accuracy of Deception Judgments, 2006, Personality and Social Psychology Review.
[44] Bo Pang, et al. The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter, 2014, ACL.
[45] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[46] Ming Yin, et al. Understanding the Effect of Accuracy on Trust in Machine Learning Models, 2019, CHI.
[47] S. Lewandowsky, et al. The dynamics of trust: comparing humans to automation, 2000, Journal of Experimental Psychology: Applied.
[48] Sean H. K. Kang. Spaced Repetition Promotes Efficient and Effective Learning, 2016.
[49] Ben Green, et al. The Principles and Limits of Algorithm-in-the-Loop Decision Making, 2019, Proc. ACM Hum. Comput. Interact.
[50] Mykola Pechenizkiy, et al. A Human-Grounded Evaluation of SHAP for Alert Processing, 2019, arXiv.
[51] Graeme Hirst, et al. Detecting Deceptive Opinions with Profile Compatibility, 2013, IJCNLP.
[52] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[53] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, arXiv.
[54] Dympna O'Sullivan, et al. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems, 2015, International Conference on Healthcare Informatics.
[55] Sibel Adali, et al. Rating Reliability and Bias in News Articles: Does AI Assistance Help Everyone?, 2019, ICWSM.
[56] Martin Wattenberg, et al. Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making, 2019, CHI.
[57] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[58] Oluwasanmi Koyejo, et al. Examples are not enough, learn to criticize! Criticism for Interpretability, 2016, NIPS.
[59] Claire Cardie, et al. Estimating the prevalence of deception in online review communities, 2012, WWW.
[60] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[61] Kyung Hyan Yoo, et al. Comparison of Deceptive and Truthful Travel Reviews, 2009, ENTER.
[62] Andreas Krause, et al. Submodular Function Maximization, 2014, Tractability.
[63] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.