How Fake News Affect Trust in the Output of a Machine Learning System for News Curation

People increasingly consume news curated by machine learning (ML) systems. Motivated by studies on algorithmic bias, this paper explores which recommendations of an algorithmic news curation system users trust and how that trust is affected by untrustworthy news stories such as fake news. In a study with 82 vocational school students with an IT background, we found that users can provide trust ratings that distinguish trustworthy recommendations of quality news stories from untrustworthy recommendations. However, a single untrustworthy news story combined with four trustworthy news stories is rated similarly to five trustworthy news stories. This could be a first indication that untrustworthy news stories benefit from appearing in a trustworthy context. The results also show the limits of users' ability to rate the recommendations of a news curation system. We discuss the implications for the user experience of interactive machine learning systems.