The success of a recommender system is determined not only by smart algorithm design but also by the quality of user data and by user appreciation. User data are gathered by the feedback system, which acts as the communication link between the recommender and the user. The proper collection of feedback is thus a key component of a recommender system: if designed poorly, it may collect worthless or too little feedback, leading to low-quality recommendations. However, little is known about how the design of feedback mechanisms influences users' willingness to give feedback. In this paper we study user behavior towards four explicit feedback mechanisms commonly used in online systems: 5-star rating (static and dynamic) and thumbs up/down (static and dynamic). We integrated these mechanisms into a popular cultural-events website (10,000 visitors a day) and monitored user interaction. In six months, over 8,000 ratings were collected and analyzed. Our results show that the distinct feedback mechanisms led to different user interaction patterns. Finding the right technique to encourage user interaction may be one of the next big challenges recommender systems have to face.