Decision making strategies differ in the presence of collaborative explanations: two conjoint studies

Rating-based summary statistics are ubiquitous in e-commerce and are often crucial components of personalized recommendation mechanisms. Visual rating summarizations in particular have been identified as an important means of explaining why an item is presented or proposed to a user. Largely unexplored, however, is the extent to which the descriptive characteristics of these rating summary statistics influence the decision making of online consumers. We therefore conducted a series of two conjoint experiments to explore how different summarizations of rating distributions (i.e., the number of ratings, mean, variance, skewness, bimodality, or origin of the ratings) impact users' decision making. In a first study with over 200 participants, we identified that users are primarily guided by the mean and the number of ratings and, to a lesser degree, by the variance and origin of a rating. When probing the maximizing behavioral tendencies of our participants, additional sensitivities to the summarized rating distributions became apparent. We therefore set up a follow-up eye-tracking study to explore in more detail how participants' choices vary with their decision-making strategies. This second round, with over 40 additional participants, supported our hypothesis that users who usually experience higher decision difficulty follow compensatory decision strategies and focus more on the decisions they make. We conclude by outlining how the results of these studies can guide algorithm development and counterbalance presumed biases in implicit user feedback.
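
To make the studied descriptors concrete, the following is a minimal Python sketch (not taken from the paper) of how such per-item rating summaries could be computed. The bimodality-coefficient variant and the conventional ~0.555 cutoff are assumptions based on the commonly used sample formula, not necessarily the exact operationalization used in the studies.

```python
# Minimal sketch: per-item rating-distribution descriptors (count, mean,
# variance, skewness, bimodality coefficient). The bimodality coefficient
# follows the commonly cited sample formula
#   BC = (g1^2 + 1) / (g2 + 3(n-1)^2 / ((n-2)(n-3)))
# with sample skewness g1 and sample excess kurtosis g2 (an assumption).
import numpy as np
from scipy.stats import skew, kurtosis

def rating_summary(ratings):
    """Return summary statistics for one item's star ratings."""
    r = np.asarray(ratings, dtype=float)
    n = r.size
    g1 = skew(r, bias=False)                    # sample skewness
    g2 = kurtosis(r, fisher=True, bias=False)   # sample excess kurtosis
    bc = (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
    return {
        "n_ratings": n,
        "mean": r.mean(),
        "variance": r.var(ddof=1),
        "skewness": g1,
        "bimodality": bc,   # values above ~0.555 are often read as bimodal
    }

# Example: a polarized ("J-shaped") rating distribution
print(rating_summary([1, 1, 1, 2, 4, 5, 5, 5, 5, 5]))
```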
