Understanding user behavior in online feedback reporting

Online reviews have become an increasingly popular way to judge the quality of products and services. Previous work has demonstrated that contradictory reporting and underlying user biases make it difficult to judge the true worth of a service. In this paper, we investigate the underlying factors that influence user behavior when reporting feedback. We examine two sources of information beyond numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports. We first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature. Second, we show that a user's rating partly reflects the difference between true quality and the prior expectation of quality inferred from previous reviews. Both findings give us a less noisy way to produce rating estimates and reveal the reasons behind user bias. Our hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.
