Why Amazon's Ratings Might Mislead You: The Story of Herding Effects

Society increasingly relies on digitized, aggregated opinions of individuals to make decisions (e.g., product recommendations based on collective ratings). One key requirement for harnessing this "wisdom of crowds" is the independence of individuals' opinions; yet, in real settings, collective opinions are rarely simple aggregations of independent minds. Recent experimental studies document that disclosing prior collective ratings distorts individuals' decision making as well as their perceptions of quality and value, highlighting a fundamental discrepancy between the value we perceive from collective ratings and products' intrinsic value. Here we present a mechanistic framework to describe the herding effects of prior collective ratings on subsequent individual decision making. Using large-scale longitudinal customer rating datasets, we find that our method successfully captures the dynamics of rating growth, helping us separate social influence bias from inherent value. Leveraging the proposed framework, we quantitatively characterize the herding effects present in product rating systems and propose strategies to untangle manipulation and social bias.

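The abstract does not spell out the model, but the core idea — that each new rating blends a rater's intrinsic opinion with the displayed running average — can be sketched as a toy simulation. The blending weight `h` and the linear mixing rule here are illustrative assumptions, not the paper's actual framework:

```python
def simulate_ratings(intrinsic, h):
    """Toy herding model: each rating blends the rater's intrinsic
    opinion with the running average displayed so far.
    h = 0 means fully independent raters; h = 1 means pure conformity.
    (An illustrative assumption only, not the paper's actual model.)
    """
    observed = []
    for r in intrinsic:
        if observed:
            shown_avg = sum(observed) / len(observed)
            observed.append((1 - h) * r + h * shown_avg)
        else:
            observed.append(r)  # the first rater sees no prior ratings
    return observed


# An enthusiastic first rater followed by unimpressed ones:
intrinsic = [5, 1, 1, 1, 1]

independent = simulate_ratings(intrinsic, h=0.0)  # observed mean 1.8
herded = simulate_ratings(intrinsic, h=0.5)       # observed mean ~2.97
```

With no social influence the observed mean equals the intrinsic mean (1.8); with `h = 0.5` the early positive rating drags every later rating upward, inflating the average to about 2.97. That gap between the displayed average and the intrinsic one is the kind of perceived-versus-intrinsic-value discrepancy the abstract describes.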