Mining Trust Values from Recommendation Errors

The increasing availability of information has furthered the need for recommender systems across a variety of domains. These systems are designed to tailor each user's information space to suit their particular information needs. Collaborative filtering is a successful and popular technique for producing recommendations based on similarities in users' tastes and opinions. Our work focuses on these similarities and on the fact that current techniques for deciding which users contribute to a recommendation are in need of improvement. In this paper we propose the use of trustworthiness to address this shortcoming. In particular, we define and empirically test a technique for eliciting trust values for each producer of a recommendation, based on that producer's history of contributions to recommendations. By leveraging under- and overestimate errors in users' past contributions to the recommendation process, we compute a recommendation range to present to the target user, and we present three different models for computing this range. Our evaluation shows that this trust-based technique can be easily incorporated into a standard collaborative filtering algorithm, and in a fair comparison our technique outperforms a benchmark algorithm in predictive accuracy. We also aim to show that presenting absolute rating predictions to users is more likely to reduce user trust in the recommender system than presenting a range of rating predictions. To evaluate the trust benefits resulting from the transparency of our recommendation range techniques, we carry out user-satisfaction trials on BoozerChoozer, a pub recommendation system. Our user-satisfaction results show that the recommendation range techniques perform up to twice as well as the benchmark.
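The core idea of mining under- and overestimate errors from producers' past contributions and widening a point prediction into a range can be sketched as follows. This is an illustrative simplification, not the paper's three models: the `error_profile` and `recommendation_range` functions and the simple averaging rule are assumptions for exposition only.

```python
def error_profile(history):
    """history: list of (predicted, actual) rating pairs for one producer.
    Returns (mean_underestimate, mean_overestimate) mined from past errors."""
    under = [a - p for p, a in history if p < a]  # producer predicted too low
    over = [p - a for p, a in history if p > a]   # producer predicted too high
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(under), mean(over)

def recommendation_range(prediction, histories):
    """Widen a collaborative-filtering point prediction into a range using
    the average under/overestimate errors of the contributing producers
    (hypothetical aggregation rule, for illustration only)."""
    profiles = [error_profile(h) for h in histories]
    avg_under = sum(u for u, _ in profiles) / len(profiles)
    avg_over = sum(o for _, o in profiles) / len(profiles)
    # Past overestimates push the lower bound down; past
    # underestimates push the upper bound up.
    return prediction - avg_over, prediction + avg_under
```

For example, a producer who has historically underestimated ratings widens the upper end of the range, making the presented interval reflect the known error tendencies of the users behind the recommendation.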
