Downside management in recommender systems

In recommender systems, bad recommendations can cause a net utility loss for both users and content providers. Managing this downside (individual loss) is a crucial problem, yet it has long been ignored. We propose a method to identify bad recommendations by modeling the users' latent preferences that have not yet been captured, using a residual model that can be applied independently on top of existing recommendation algorithms. The residual utility comprises two components, benefit and cost, which are learned simultaneously from users' observed interactions with the recommender system. We further classify user behavior into fine-grained categories and, based on them, propose an efficient optimization algorithm that estimates the benefit and cost using a Bayesian partial order. By accurately computing the utility users obtain from recommendations through this benefit-cost analysis, we can infer the optimal threshold that determines the downside portion of the recommender system. We validate the proposed method with experiments on real-world datasets and demonstrate that it can help prevent bad recommendations from being shown.
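To make the benefit-cost idea concrete, here is a minimal sketch of scoring a candidate recommendation's residual utility as benefit minus cost and withholding items whose utility falls below a threshold. The function names, the per-item benefit/cost scores, and the fixed threshold are illustrative assumptions, not the paper's actual model; in the paper the scores are learned from observed interactions via the Bayesian partial-order formulation and the threshold is inferred from the benefit-cost analysis.

```python
import numpy as np

def residual_utility(benefit, cost):
    """Residual utility of candidate items: estimated benefit minus estimated cost.

    `benefit` and `cost` are assumed to be per-item scores already learned
    from users' observed interactions with the recommender system.
    """
    return np.asarray(benefit, dtype=float) - np.asarray(cost, dtype=float)

def filter_downside(candidates, benefit, cost, threshold=0.0):
    """Drop candidate recommendations whose residual utility is below `threshold`.

    The threshold is treated here as a given scalar for illustration; the
    paper infers it from the benefit-cost analysis rather than fixing it.
    """
    utility = residual_utility(benefit, cost)
    return [item for item, u in zip(candidates, utility) if u >= threshold]

# Hypothetical usage: three candidate items with assumed benefit/cost scores.
items = ["item_a", "item_b", "item_c"]
benefit = [0.8, 0.3, 0.6]
cost = [0.2, 0.5, 0.1]
print(filter_downside(items, benefit, cost, threshold=0.0))  # ['item_a', 'item_c']
```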
