Political audience diversity and news reliability in algorithmic ranking

Newsfeed algorithms frequently amplify misinformation and other low-quality content. How can social media platforms more effectively promote reliable information? Existing approaches are difficult to scale and vulnerable to manipulation. In this paper, we propose using the political diversity of a website's audience as a quality signal. Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 US residents, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards. We then incorporate audience diversity into a standard collaborative filtering framework and show that our improved algorithm increases the trustworthiness of websites suggested to users, especially those who most frequently consume misinformation, while keeping recommendations relevant. These findings suggest that partisan audience diversity is a valuable signal of higher journalistic standards that should be incorporated into algorithmic ranking decisions.
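The core idea can be illustrated with a minimal sketch: estimate the partisan diversity of each site's audience and blend it into an item's collaborative-filtering relevance score. All names, data, and the variance-based diversity estimator below are hypothetical illustrations, not the paper's exact method or parameters.

```python
import numpy as np

# Hypothetical partisanship of each site's visitors: -1 = left, +1 = right.
audiences = {
    "diverse-news.example":  [-0.8, -0.2, 0.1, 0.5, 0.9],   # politically mixed
    "partisan-blog.example": [0.7, 0.8, 0.9, 0.95, 1.0],    # homogeneous
}

def audience_diversity(scores):
    # Variance of visitors' partisanship as a simple diversity proxy
    # (one of several possible estimators; a hypothetical choice here).
    return float(np.var(scores))

# Hypothetical relevance scores from a standard collaborative filter
# for one user.
cf_scores = {"diverse-news.example": 0.6, "partisan-blog.example": 0.7}

alpha = 0.5  # hypothetical mixing weight between relevance and diversity
reranked = {
    site: (1 - alpha) * cf_scores[site] + alpha * audience_diversity(aud)
    for site, aud in audiences.items()
}
```

With these toy numbers, the site with the politically mixed audience overtakes the homogeneous one after reranking even though its raw relevance score is lower, which is the qualitative behavior the abstract describes: diversity acts as a quality boost without discarding relevance.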
