[1] Tomas Mikolov, et al. Bag of Tricks for Efficient Text Classification, 2016, EACL.
[2] Anatoliy A. Gruzd, et al. Examining Sentiments and Popularity of Pro- and Anti-Vaccination Videos on YouTube, 2017, SMSociety.
[3] Paul Covington, et al. Deep Neural Networks for YouTube Recommendations, 2016, RecSys.
[4] H. Sebastian Seung, et al. Learning the parts of objects by non-negative matrix factorization, 1999, Nature.
[5] Michael Runcieman. YouTube, the Great Radicalizer, 2018, The New York Times.
[6] Michael Runcieman. How YouTube's A.I. boosts alternative facts, 2018.
[7] Guido Caldarelli, et al. Users Polarization on Facebook and Youtube, 2016, PLoS ONE.
[8] Tanushree Mitra, et al. Conspiracies Online: User Discussions in a Conspiracy Community Following Dramatic Events, 2018, ICWSM.
[9] Jürgen Buder, et al. Reducing confirmation bias and evaluation bias: When are preference-inconsistent recommendations effective - and when not?, 2012, Comput. Hum. Behav.
[10] Li Wei, et al. Recommending what video to watch next: a multitask ranking system, 2019, RecSys.
[11] Nitin Agarwal, et al. Analyzing Disinformation and Crowd Manipulation Tactics on YouTube, 2018, IEEE/ACM ASONAM.
[12] Anna Zaitsev, et al. Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization, 2019, First Monday.
[13] Derek Greene, et al. Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems, 2015.
[14] Joachim Allgaier. Science and Environmental Communication on YouTube: Strategically Distorted Communications in Online Videos on Climate Change and Climate Engineering, 2019, Front. Commun.
[15] Karen Spärck Jones. A statistical interpretation of term specificity and its application in retrieval, 1972, J. Documentation.
[16] Virgílio A. F. Almeida, et al. Auditing radicalization pathways on YouTube, 2019, FAT*.
[17] Jean-Loup Guillaume, et al. Fast unfolding of communities in large networks, 2008, arXiv:0803.0476.