Was this review helpful to you? It depends! Context and voting patterns in online content

When a website hosting user-generated content asks users a straightforward question, "Was this content helpful?", with a single "Yes" and a single "No" button as the two possible answers, one might expect a straightforward answer. In this paper, we explore how users respond to this question and find that their responses are not so straightforward after all. Using data from Amazon product reviews, we present evidence that users do not make absolute, independent voting decisions based on individual review quality alone. Rather, both whether users vote at all and the polarity of their vote on any given review depend on the context in which they view it: reviews receive a larger overall number of votes when they are misranked, and vote polarity shifts positive when a review is ranked lower than it deserves and negative when it is ranked higher. We distill these empirical findings into a new probabilistic model of rating behavior that captures the dependence of rating decisions on context. Understanding and formally modeling voting behavior is crucial for designing learning mechanisms and algorithms for review ranking, and we conjecture that many of our findings also apply to user behavior in other online content-rating settings.
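The abstract does not spell out the model's functional form, so the following is only a minimal sketch of what a context-dependent voting model could look like, under two assumed mechanics drawn from the findings above: the probability of voting at all grows with the degree of misranking, and the polarity of a cast vote is corrective. All names and parameters here (simulate_vote, base_rate, attention_gain, polarity_gain) are hypothetical illustrations, not the authors' formulation.

```python
import math
import random

def simulate_vote(deserved_rank, displayed_rank,
                  base_rate=0.05, attention_gain=0.10, polarity_gain=0.5):
    """One user's voting decision under a hypothetical context model.

    misranking > 0 means the review is displayed lower (worse) than it
    deserves; misranking < 0 means it is displayed higher than it deserves.
    Assumptions (illustrative only, not the paper's exact model):
      * the probability of voting at all grows with |misranking|;
      * given a vote, the chance it is 'helpful' shifts upward when the
        review is ranked too low and downward when ranked too high.
    """
    misranking = displayed_rank - deserved_rank
    # Misranked reviews attract more attention, hence more votes overall.
    p_vote = min(1.0, base_rate + attention_gain * abs(misranking))
    if random.random() >= p_vote:
        return None                                   # no vote cast
    # Corrective polarity: a logistic shift in the direction that would
    # move the review toward its deserved rank.
    p_helpful = 1.0 / (1.0 + math.exp(-polarity_gain * misranking))
    return "helpful" if random.random() < p_helpful else "unhelpful"

# Example: a review that deserves rank 2 but is displayed at rank 10.
votes = [simulate_vote(deserved_rank=2, displayed_rank=10)
         for _ in range(10_000)]
cast = [v for v in votes if v is not None]
print(f"{len(cast)} votes cast; "
      f"{sum(v == 'helpful' for v in cast) / len(cast):.0%} helpful")
```

Running this sketch reproduces the qualitative pattern described above: a review displayed well below its deserved rank draws far more votes than the base rate alone would predict, and those votes skew heavily positive.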
