Between Overload and Indifference: Detection of Fake Accounts and Social Bots by Community Managers

Participative online journalistic platforms not only expand citizens' opportunities to take part in public life; they also open avenues for the dissemination of online propaganda through fake accounts and social bots. Community managers are expected to separate genuine expressions of opinion from statements manipulated via fake accounts and social bots. However, little is known about the criteria by which these managers distinguish between “real” and “fake” users. The present study addresses this gap with a series of expert interviews. The results show that community managers have extensive experience with fake accounts but find it difficult to assess the degree of automation behind them. The criteria by which an account is classified as “fake” can be organized along a micro-meso-macro structure: recourse to macro-level indicators is rare and partly stereotyped, whereas impression-forming processes at the micro and meso levels predominate. We discuss the results with a view to possible long-term consequences for collective participation.
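
To make the micro-meso-macro structure concrete, the following sketch shows how criteria at the three levels could, in principle, be grouped in a rule-based screening aid. This is a hypothetical illustration only: the study is interview-based and reports no detection algorithm, and all field names, thresholds, and weights below are invented for the purpose of the example.

```python
# Hypothetical illustration: grouping fake-account cues by the
# micro-meso-macro structure described in the abstract.
# Every indicator, threshold, and weight here is an assumption.
from dataclasses import dataclass


@dataclass
class Account:
    has_profile_photo: bool          # micro: cue from the profile itself
    username_looks_generated: bool   # micro: e.g. "user8271936"
    comments_per_hour: float         # meso: behavior within a discussion
    repeats_identical_text: bool     # meso: copy-paste posting pattern
    shared_network_cluster: bool     # macro: coordination across accounts


def suspicion_score(acc: Account) -> float:
    """Return a heuristic score in [0, 1]; higher means more suspicious."""
    score = 0.0
    # Micro level: impressions formed from the single account.
    if not acc.has_profile_photo:
        score += 0.15
    if acc.username_looks_generated:
        score += 0.15
    # Meso level: conspicuous behavior inside the comment section.
    if acc.comments_per_hour > 20:
        score += 0.25
    if acc.repeats_identical_text:
        score += 0.25
    # Macro level: network-wide coordination signals, which the
    # interviewed managers reportedly draw on least often.
    if acc.shared_network_cluster:
        score += 0.20
    return min(score, 1.0)


if __name__ == "__main__":
    suspect = Account(False, True, 35.0, True, True)
    print(f"suspicion: {suspicion_score(suspect):.2f}")
```

In line with the gatekeeping role described in the study, such a score could at most triage accounts for human review; the final “real”-versus-“fake” judgment would remain with the community manager.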
