RecSys'17 Joint Workshop on Interfaces and Human Decision Making for Recommender Systems

As intelligent interactive systems, recommender systems aim to produce recommendations that fit users' wishes and needs. Yet the large majority of recommender systems research still focuses on accuracy criteria, paying far less attention to how users interact with the system and how the user interface shapes users' selection behavior. Consequently, it is important to look beyond algorithms. The main goals of the IntRS workshop are to analyze the impact of user interfaces and interaction design, and to explore human interaction with recommender systems from a human decision-making perspective. Methodologies for evaluating these aspects are also within the scope of the workshop.