By the Crowd and for the Crowd: Perceived Utility and Willingness to Contribute to Trustworthiness Indicators on Social Media

This study explores how people perceive the potential utility of trustworthiness indicators and how willing they are to contribute to them as a way to combat misinformation and disinformation on social media. Analysis of qualitative and quantitative data from a survey (N=376) indicates that a majority of respondents believe trustworthiness indicators would be valuable because they can reduce uncertainty and provide guidance on how to interact with content. However, perceptions of how and when these indicators can provide value vary widely in detail. A majority of respondents are also willing to contribute to trustworthiness indicators on social media to some extent, motivated by a sense of duty and by personal expertise in information-verification practices, but they are wary of the effort and burden contributing would place on them. Respondents who did not want to use or contribute to trustworthiness indicators attributed this to a lack of faith in the concept, stemming from what they perceived as inherent and insurmountable biases on social media. Together, our findings highlight the complexity of designing, structuring, and presenting trustworthiness indicators in light of the diverse attitudes and perceptions users hold.
