Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization

The role that YouTube's behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by journalists and academics alike. This study directly quantifies these claims by examining the role the algorithm plays in surfacing radicalized content. After categorizing nearly 800 political channels, we differentiate between political schemas and analyze the flow of recommendation traffic within and between each group. A detailed analysis of the recommendations received by each channel type leads us to refute the popular radicalization claims. On the contrary, our data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels, with a slant toward left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm does not promote inflammatory or radicalized content, as previously claimed by several outlets.
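
For illustration only, a minimal sketch of how cross-category recommendation traffic could be tallied from categorized channels; this is not the authors' actual pipeline, and the channel names, category labels, and the `recommendations` edge list below are hypothetical stand-ins for the paper's data.

```python
from collections import defaultdict

# Hypothetical channel-to-category labels, e.g. from manual coding of ~800 channels.
channel_category = {
    "channel_a": "mainstream_news",
    "channel_b": "independent_left",
    "channel_c": "independent_right",
}

# Hypothetical observed recommendations as (source_channel, recommended_channel) pairs.
recommendations = [
    ("channel_b", "channel_a"),
    ("channel_c", "channel_a"),
    ("channel_c", "channel_c"),
]

# Tally a category-to-category flow matrix: flows[src][dst] counts how often
# the algorithm recommends a dst-category channel from a src-category channel.
flows = defaultdict(lambda: defaultdict(int))
for src, dst in recommendations:
    flows[channel_category[src]][channel_category[dst]] += 1

# Normalize each row into shares, so each entry is the fraction of recommendation
# traffic leaving a source category that lands on each destination category.
for src, row in flows.items():
    total = sum(row.values())
    shares = {dst: count / total for dst, count in row.items()}
    print(src, shares)
```

Row-normalizing the flow matrix in this way makes it possible to compare categories of different sizes, e.g. to ask what fraction of recommendations leaving independent channels point at mainstream media versus back into extremist content.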
