This Is Not What We Ordered: Exploring Why Biased Search Result Rankings Affect User Attitudes on Debated Topics

In web search on debated topics, algorithmic and cognitive biases strongly influence how users consume and process information. Recent research has shown that this can lead to a search engine manipulation effect (SEME): when search result rankings are biased towards a particular viewpoint, users tend to adopt this favored viewpoint. To better understand the mechanisms underlying SEME, we present a pre-registered, 5 × 3 factorial user study investigating whether order effects (i.e., users adopting the viewpoint pertaining to higher-ranked documents) can cause SEME. For five different debated topics, we evaluated attitude change after exposing participants with mild pre-existing attitudes to search results that were overall viewpoint-balanced but reflected one of three levels of algorithmic ranking bias. We found that attitude change did not differ across levels of ranking bias and did not vary based on individual user differences. Our results thus suggest that order effects may not be an underlying mechanism of SEME. Exploratory analyses lend support to the presence of exposure effects (i.e., users adopting the majority viewpoint among the results they examine) as a contributing factor to users' attitude change. We discuss how our findings can inform the design of user bias mitigation strategies.
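As an illustrative aside, the notion of "ranking bias in a viewpoint-balanced result list" can be sketched as a rank-discounted exposure difference between the two viewpoints: the list contains equally many supporting and opposing documents, but one viewpoint is systematically placed higher. The sketch below is a minimal assumption-laden example (the function name, the logarithmic discount, and the three example lists are illustrative choices, not the metric or stimuli used in the study).

```python
# Illustrative sketch only (not the study's metric): score ranking bias in a
# viewpoint-balanced result list as a rank-weighted exposure difference.
# Labels: +1 = supporting viewpoint, -1 = opposing viewpoint; index 0 = top result.
import math

def rank_weighted_exposure_bias(viewpoints):
    """Return a signed bias score in [-1, 1]; positive means the supporting
    viewpoint receives more rank-weighted exposure than the opposing one."""
    exposure_pro = 0.0
    exposure_con = 0.0
    for rank, label in enumerate(viewpoints, start=1):
        weight = 1.0 / math.log2(rank + 1)  # assumed logarithmic position discount
        if label > 0:
            exposure_pro += weight
        else:
            exposure_con += weight
    total = exposure_pro + exposure_con
    return (exposure_pro - exposure_con) / total if total else 0.0

# Three hypothetical bias levels over a balanced list of 3 pro and 3 con results:
balanced = [+1, -1, +1, -1, +1, -1]   # alternating order -> lowest ranking bias
moderate = [+1, +1, -1, +1, -1, -1]   # supporting results tend to rank higher
extreme  = [+1, +1, +1, -1, -1, -1]   # all supporting results ranked on top

for name, lst in [("balanced", balanced), ("moderate", moderate), ("extreme", extreme)]:
    print(f"{name:9s} bias = {rank_weighted_exposure_bias(lst):+.3f}")
```

Under this sketch the three lists yield increasing bias scores even though each contains the same number of documents per viewpoint, which is the distinction between ranking bias and overall viewpoint balance that the study manipulates.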
