Design Implications for Explanations: A Case Study on Supporting Reflective Assessment of Potentially Misleading Videos

Online videos have become a prevalent means for people to acquire information. Videos, however, are often polarized or misleading, or cover topics on which people hold contradictory views. In this work, we introduce natural language explanations to stimulate more deliberate reasoning about videos and to raise users’ awareness of potentially deceptive or biased information. With these explanations, we aim to support users in actively assessing and reflecting on the usefulness of the videos. We generate the explanations through an end-to-end pipeline that extracts reflection triggers, so that users receive additional information about the video based on its source, covered topics, communicated emotions, and sentiment. In a between-subjects user study, we examine the effect of showing these explanations for videos on three controversial topics. In addition, we assess the users’ alignment with the video’s message and the strength of their beliefs about the topic. Our results indicate that respondents’ alignment with the video’s message is critical to how they evaluate the video’s usefulness. Overall, participants found the explanations useful and of high quality. While the explanations did not influence the perceived usefulness of the videos compared to seeing the video alone, people with an extreme negative alignment with a video’s message perceived it as less useful (with or without explanations) and felt more confident in their assessment. We relate our findings to cognitive dissonance, since users seem to be less receptive to explanations when a video’s message strongly challenges their beliefs. Given these findings, we provide a set of design implications for explanations, grounded in theories on reducing cognitive dissonance, aimed at raising awareness of online deception.
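
The abstract does not detail the pipeline’s components, so the following is only a minimal sketch of how such reflection triggers (source, covered topics, communicated emotions, and sentiment) could be derived from a video’s transcript and metadata and rendered as a natural-language explanation. The function name `extract_reflection_triggers`, the toy emotion lexicon, and the use of NLTK’s VADER sentiment analyzer are illustrative assumptions, not the authors’ actual implementation.

```python
# Minimal sketch of a reflection-trigger pipeline: given a video's
# transcript and source, derive topic, emotion, and sentiment cues and
# combine them into one natural-language explanation. Lexicons and
# thresholds here are stand-ins, not the paper's implementation.
from collections import Counter

from nltk.sentiment.vader import SentimentIntensityAnalyzer  # pip install nltk
# Run once beforehand: nltk.download('vader_lexicon')

# Tiny stand-in emotion lexicon; a real pipeline would use a trained model.
EMOTION_LEXICON = {
    "fear": {"danger", "threat", "risk", "afraid"},
    "anger": {"outrage", "scandal", "corrupt", "lie"},
    "joy": {"hope", "progress", "celebrate", "win"},
}

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "that"}


def extract_reflection_triggers(transcript: str, channel: str) -> str:
    words = [w.strip(".,!?").lower() for w in transcript.split()]

    # Topic cue: most frequent content words (naive keyword extraction).
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    topics = [w for w, _ in counts.most_common(3)]

    # Emotion cue: which lexicon categories the transcript touches.
    emotions = [e for e, cues in EMOTION_LEXICON.items() if cues & set(words)]

    # Sentiment cue: VADER compound score mapped to a coarse label.
    score = SentimentIntensityAnalyzer().polarity_scores(transcript)["compound"]
    sentiment = ("positive" if score > 0.05
                 else "negative" if score < -0.05 else "neutral")

    return (f"This video was published by '{channel}'. "
            f"It mainly covers: {', '.join(topics) or 'n/a'}. "
            f"It communicates {', '.join(emotions) or 'no strong'} emotions "
            f"and an overall {sentiment} sentiment.")


if __name__ == "__main__":
    print(extract_reflection_triggers(
        "The corrupt officials pose a danger to public health and safety.",
        channel="Example News"))
```

In practice, the keyword-based topic cue and the lexicon-based emotion cue would be replaced by trained classifiers; the sketch only illustrates how the four trigger types could feed a single explanation shown alongside the video.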
