Convergence analysis for an online recommendation system

Online recommendation systems use votes from experts or other users to recommend objects to customers. We propose a recommendation algorithm that uses an average weight updating rule, prove its convergence to the best expert, and derive an upper bound on its loss. Recommendation algorithms often make assumptions that do not hold in practice, such as requiring a large number of good objects, the presence of experts with exactly the same taste as the user receiving the recommendation, or experts who vote on all or a majority of objects. Our algorithm relaxes these assumptions. Besides theoretical performance guarantees, our simulation results show that the proposed algorithm outperforms the current state-of-the-art recommendation algorithm, DSybil.
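The abstract does not spell out the average weight updating rule, so the following is only a hedged sketch of the general expert-weighted recommendation framework the paper builds on (in the spirit of the Weighted Majority Algorithm of [9], with abstaining experts as in the sleeping-experts setting of [5]). The function names, the vote encoding, and the multiplicative penalty `beta` are all illustrative assumptions, not the paper's actual rule.

```python
def recommend(weights, votes):
    """One round of expert-weighted recommendation (illustrative sketch).

    weights: dict mapping expert -> current non-negative weight
    votes:   dict mapping expert -> +1 (recommend) or -1 (reject);
             experts absent from `votes` abstain this round, so they
             need not vote on all or a majority of objects
    Returns the weight-weighted majority recommendation, +1 or -1.
    """
    score = sum(weights[e] * v for e, v in votes.items())
    return 1 if score >= 0 else -1


def penalize(weights, votes, outcome, beta=0.5):
    """Discount experts whose vote disagreed with the observed outcome.

    beta is an assumed multiplicative penalty in (0, 1); abstaining
    experts are left untouched.
    """
    for e, v in votes.items():
        if v != outcome:
            weights[e] *= beta


# Example round: three experts start with equal weight; expert "b"
# mis-votes and is discounted after the true outcome is revealed.
weights = {"a": 1.0, "b": 1.0, "c": 1.0}
votes = {"a": 1, "b": -1, "c": 1}
rec = recommend(weights, votes)
penalize(weights, votes, outcome=1)
```

Under multiplicative updates of this kind, experts who vote well retain high weight, which is the mechanism behind convergence to the best expert; the paper's averaged variant of the update is what its loss bound analyzes.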

[1] David Haussler, et al. Tight worst-case loss bounds for predicting with expert advice. EuroCOLT, 1994.

[2] Yoram Singer, et al. Using and combining predictors that specialize. STOC '97, 1997.

[3] Vladimir Vovk, et al. Aggregating strategies. COLT '90, 1990.

[4] Neri Merhav, et al. Universal Prediction. IEEE Trans. Inf. Theory, 1998.

[5] Robert D. Kleinberg, et al. Regret bounds for sleeping experts and bandits. Machine Learning, 2010.

[6] Gábor Lugosi, et al. Prediction, Learning, and Games. 2006.

[7] Feng Xiao, et al. DSybil: Optimal Sybil-Resistance for Recommendation Systems. 30th IEEE Symposium on Security and Privacy, 2009.

[8] David Haussler, et al. How to use expert advice. STOC, 1993.

[9] Manfred K. Warmuth, et al. The Weighted Majority Algorithm. Inf. Comput., 1994.

[10] Yishay Mansour, et al. From External to Internal Regret. J. Mach. Learn. Res., 2005.