The spread of misinformation by social bots

The massive spread of digital misinformation has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes of the viral diffusion of misinformation online and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a disproportionate role in spreading and repeating misinformation. Automated accounts are particularly active in amplifying misinformation in the very early spreading moments, before a claim goes viral. Bots target users with many followers through replies and mentions, and may disguise their geographic locations. Humans are vulnerable to this manipulation, retweeting bots that post misinformation. Successful sources of false and misleading claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
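The finding that automated accounts concentrate in the earliest moments of a cascade suggests a simple measurement: for each claim, how much of its initial spread comes from likely-automated accounts? The Python sketch below illustrates one way such a measurement could be set up; it is not the authors' pipeline. The Tweet record fields, the 0.5 bot-score cutoff, and the 60-second early window are all assumptions made for this example (in practice bot scores would come from a classifier such as the one the paper's analysis relies on).

# Illustrative sketch (not the paper's actual methodology): estimate, for each
# claim, the fraction of its earliest tweets posted by likely-automated
# accounts. Field names, the 0.5 threshold, and the 60 s window are assumed.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Tweet:
    claim_id: str
    timestamp: float   # seconds since epoch
    bot_score: float   # score in [0, 1] that the posting account is automated

BOT_THRESHOLD = 0.5    # assumed cutoff for "likely bot"
EARLY_WINDOW = 60.0    # seconds after the claim's first tweet

def early_bot_fraction(tweets):
    """For each claim, return the fraction of tweets posted within
    EARLY_WINDOW of the claim's first appearance whose account's
    bot score exceeds BOT_THRESHOLD."""
    by_claim = defaultdict(list)
    for t in tweets:
        by_claim[t.claim_id].append(t)
    fractions = {}
    for claim_id, ts in by_claim.items():
        start = min(t.timestamp for t in ts)
        early = [t for t in ts if t.timestamp - start <= EARLY_WINDOW]
        bots = sum(1 for t in early if t.bot_score > BOT_THRESHOLD)
        fractions[claim_id] = bots / len(early)
    return fractions

if __name__ == "__main__":
    sample = [
        Tweet("claim-1", 0.0, 0.9),    # likely bot seeds the claim
        Tweet("claim-1", 10.0, 0.8),   # more automated amplification early on
        Tweet("claim-1", 300.0, 0.1),  # human accounts arrive after the window
        Tweet("claim-2", 0.0, 0.2),
        Tweet("claim-2", 5.0, 0.3),
    ]
    print(early_bot_fraction(sample))  # {'claim-1': 1.0, 'claim-2': 0.0}

Under the paper's finding, claims that later go viral would tend to show elevated early bot fractions; a hypothetical "claim-1" seeded and amplified by high-bot-score accounts scores 1.0, while an organically spreading "claim-2" scores 0.0.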
