Learning Community-Based Preferences via Dirichlet Process Mixtures of Gaussian Processes

Bayesian approaches to preference learning with Gaussian Processes (GPs) are attractive because they explicitly model uncertainty in users' latent utility functions; unfortunately, existing techniques have cubic time complexity in the number of users, which renders them intractable for collaborative preference learning over a large user base. Exploiting the observation that user populations often decompose into communities of shared preferences, we model user preferences as an infinite Dirichlet Process (DP) mixture of communities and learn (a) the expected number of preference communities represented in the data, (b) a GP-based preference model over items tailored to each community, and (c) the mixture weights representing each user's fraction of community membership. This yields a learning and inference procedure that scales linearly, rather than cubically, in the number of users, and additionally allows us to analyze individual community preferences and their associated members. We evaluate our approach on a variety of preference data sources, including Amazon Mechanical Turk, showing that our method is more scalable than, and as accurate as, previous GP-based preference learning work.
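As a rough illustration of the generative story described above (a hypothetical sketch, not the authors' implementation), community memberships can be drawn from a Chinese Restaurant Process, the sequential view of the DP, and each community's latent utility over items can be modeled as a draw from a GP prior; all function names and parameter values here are illustrative assumptions:

```python
# Hypothetical sketch of the DP-mixture-of-GPs generative model:
# users are assigned to communities by a Chinese Restaurant Process (CRP),
# and each community carries a latent utility function over items sampled
# from a zero-mean GP prior with an RBF kernel.
import numpy as np

def crp_assignments(n_users, alpha, rng):
    """Sample community labels for n_users from a CRP with concentration alpha."""
    counts = []                       # counts[k] = number of users in community k
    labels = []
    for _ in range(n_users):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()          # P(existing community k) ∝ counts[k]; P(new) ∝ alpha
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # "rich get richer": new community opened
        else:
            counts[k] += 1
        labels.append(k)
    return np.array(labels)

def gp_utility_sample(items, length_scale, rng):
    """One draw from a zero-mean GP with an RBF kernel over 1-D item features."""
    d = items[:, None] - items[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-8 * np.eye(len(items))
    return rng.multivariate_normal(np.zeros(len(items)), K)

rng = np.random.default_rng(0)
labels = crp_assignments(n_users=200, alpha=2.0, rng=rng)
n_communities = labels.max() + 1      # number of communities the CRP instantiated
items = np.linspace(0.0, 1.0, 25)     # toy 1-D item feature space
utilities = {k: gp_utility_sample(items, 0.2, rng) for k in range(n_communities)}
print(n_communities, "communities instantiated for 200 users")
```

Because each user contributes observations only through community-level GPs, the number of GP models tracks the (typically small) number of communities rather than the number of users, which is the intuition behind the linear-in-users scaling claimed in the abstract.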
