Algorithmic content moderation: Technical and political challenges in the automation of platform governance
Robert Gorwa | Reuben Binns | Christian Katzenbach
[1] Lucas Dixon, et al. Ex Machina: Personal Attacks Seen at Scale, 2016, WWW.
[2] Ingmar Weber, et al. Understanding Abuse: A Typology of Abusive Language Detection Subtasks, 2017, ALW@ACL.
[3] Ricardo Baeza-Yates, et al. FA*IR: A Fair Top-k Ranking Algorithm, 2017, CIKM.
[4] N. Elkin-Koren, et al. Behind the Scenes of Online Copyright Enforcement: Empirical Evidence on Notice & Takedown, 2018.
[5] Michael Veale, et al. Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation, 2017, SocInfo.
[6] Ronak Patel. First World Problems: A Fair Use Analysis of Internet Memes, 2013.
[7] Brendan T. O'Connor, et al. Demographic Dialectal Variation in Social Media: A Case Study of African-American English, 2016, EMNLP.
[8] James Grimmelmann, et al. The Virtues of Moderation, 2015.
[9] Jenna Burrell, et al. How the machine ‘thinks’: Understanding opacity in machine learning algorithms, 2016.
[10] Nicolas Suzor, et al. Lawless: The Secret Rules That Govern Our Digital Lives, 2018.
[11] M. Soha, et al. Monetizing a Meme: YouTube, Content ID, and the Harlem Shake, 2016.
[12] R. Stuart Geiger, et al. Bots, bespoke, code and the materiality of software platforms, 2014.
[13] Ying Chen, et al. Detecting Offensive Language in Social Media to Protect Adolescent Online Safety, 2012, 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing.
[14] Christopher G. Harris, et al. A Combined Corner and Edge Detector, 1988, Alvey Vision Conference.
[15] Michael Wiegand, et al. A Survey on Hate Speech Detection using Natural Language Processing, 2017, SocialNLP@EACL.
[16] Julie E. Cohen, et al. Fair Use Infrastructure for Copyright Management Systems, 2000.
[17] Jennifer M. Urban, et al. Notice and Takedown in Everyday Practice, 2016.
[18] M. Kearns, et al. Fairness in Criminal Justice Risk Assessments: The State of the Art, 2017, Sociological Methods & Research.
[19] M. C. Elish, et al. Situating methods in the magic of Big Data and AI, 2018.
[20] Natasha Duarte, et al. Mixed Messages? The Limits of Automated Social Media Content Analysis, 2018, FAT.
[21] Nicole Immorlica, et al. Locality-sensitive hashing scheme based on p-stable distributions, 2004, SCG '04.
[22] Geoff Kaufman, et al. Moderator engagement and community development in the age of algorithms, 2019, New Media Soc.
[23] Robert Gorwa, et al. Democratic Transparency in the Platform Society, 2020.
[24] Kate Klonick, et al. Facebook v. Sullivan: Building Constitutional Law for Online Speech, 2019, SSRN Electronic Journal.
[25] Jiao Yu-hua, et al. An Overview of Perceptual Hashing, 2008.
[26] Marcel Broersma, et al. Witnessing in the new memory ecology: Memory construction of the Syrian conflict on YouTube, 2017, New Media Soc.
[27] Mike Ananny, et al. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, 2018, New Media Soc.
[28] Alexandra Chouldechova, et al. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions, 2018, FAT.
[29] Sarah T. Roberts, et al. Behind the Screen, 2019.
[30] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[31] Sarah T. Roberts. Digital detritus: 'Error' and the logic of opacity in social media content moderation, 2018, First Monday.
[32] R. Stuart Geiger. The Lives of Bots, 2018, ArXiv.
[33] Julie E. Cohen, et al. Fair Use Infrastructure for Rights Management Systems, 2004.
[34] Sarah Myers West, et al. What do we mean when we talk about transparency? Towards meaningful transparency in commercial content moderation, 2019.
[35] Robert Gorwa. The platform governance triangle: conceptualising the informal regulation of online content, 2019, Internet Policy Rev.
[36] Reuben Binns, et al. Fairness in Machine Learning: Lessons from Political Philosophy, 2017, FAT.
[37] Robert Gorwa, et al. What is platform governance?, 2019, Information, Communication & Society.
[38] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.
[39] Radha Poovendran, et al. Deceiving Google's Perspective API Built for Detecting Toxic Comments, 2017, ArXiv.
[40] Niva Elkin-Koren, et al. Accountability in Algorithmic Copyright Enforcement, 2016.
[41] A. Hoffmann. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse, 2019, Information, Communication & Society.
[42] David Stuart, et al. SPAM: A Shadow History of the Internet, 2014.
[43] Michael D. Ekstrand, et al. Exploring author gender in book rating and recommendation, 2018, User Modeling and User-Adapted Interaction.
[44] Kate Crawford, et al. Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics, 2016.
[45] Paul Resnick, et al. Slash(dot) and burn: distributed moderation in a large online conversation space, 2004, CHI.
[46] Andrew D. Selbst, et al. Big Data's Disparate Impact, 2016.
[47] Ellen Spertus, et al. Smokey: Automatic Recognition of Hostile Messages, 1997, AAAI/IAAI.
[48] Henry Lieberman, et al. Common Sense Reasoning for Detection, Prevention, and Mitigation of Cyberbullying, 2012, TIIS.
[49] K. Erickson, et al. “This Video is Unavailable”: Analyzing Copyright Takedown of User-Generated Content on YouTube, 2018.