Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels

When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, the moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. Despite this, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation measures and can inform their design and deployment.
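
The abstract quantifies community activity through three concrete signals: number of posts, number of active users, and number of newcomers. As an illustration only, the Python sketch below computes all three per week from a generic table of posts. The file name, the column names (author, created_utc), and the weekly granularity are assumptions made for the example; this is a minimal sketch, not the authors' actual pipeline.

```python
# Illustrative sketch: weekly activity signals (posts, active users,
# newcomers) from a generic post table. All input details are assumed.
import pandas as pd

# One row per post; "author" and "created_utc" (epoch seconds) are assumed columns.
posts = pd.read_csv("posts.csv")
posts["week"] = pd.to_datetime(posts["created_utc"], unit="s").dt.to_period("W")

# Total posts and distinct active users per week.
weekly = posts.groupby("week").agg(
    n_posts=("author", "size"),
    active_users=("author", "nunique"),
)

# Newcomers: authors whose first observed post falls in the given week.
first_week = posts.groupby("author")["week"].min()
weekly["newcomers"] = first_week.value_counts()
weekly["newcomers"] = weekly["newcomers"].fillna(0).astype(int)

print(weekly)
```

Comparing such series before and after a ban or migration date would loosely mirror the kind of before/after analysis the study describes.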
