Misinfo Belief Frames: A Case Study on Covid & Climate News

Readers' prior beliefs shape how they project meaning onto news headlines. These beliefs can influence their perception of news reliability, their reaction to the news, and their likelihood of spreading misinformation through social networks. However, most prior work focuses on fact-checking the veracity of news or on stylometry rather than on measuring the impact of misinformation. We propose Misinfo Belief Frames, a formalism for understanding how readers perceive the reliability of news and the impact of misinformation. We also introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines. The frames use commonsense reasoning to uncover implications of real and fake news headlines about two global crises: the COVID-19 pandemic and climate change. Our results with large-scale language models trained to predict belief frames show that machine-generated inferences can shift readers' trust in news headlines, affecting trust judgments in 29.3% of cases. This demonstrates the potential of generated frames for countering misinformation.
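As a rough illustration of how belief-frame prediction could be framed as text-to-text generation, the sketch below prompts a pretrained seq2seq model for inferences along a few frame dimensions. The prompt format, dimension names, and model choice (t5-base, fine-tuned on MBF in practice) are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch: generating belief-frame inferences for a headline with a
# pretrained text-to-text model (hypothetical prompt format and dimensions).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # would be fine-tuned on MBF data

headline = "New study claims household remedy cures COVID-19"
dimensions = ["writer intent", "reader perception", "reader action"]  # illustrative frame dimensions

for dim in dimensions:
    prompt = f"headline: {headline} question: what is the {dim}?"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=32, num_beams=4)
    inference = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"{dim}: {inference}")
```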
