Maximizing Induced Cardinality Under a Determinantal Point Process

Determinantal point processes (DPPs) are well-suited to recommender systems where the goal is to generate collections of diverse, high-quality items. In the existing literature this is usually formulated as finding the mode of the DPP (the so-called MAP set). However, the MAP objective inherently assumes that the DPP models "optimal" recommendation sets, yet obtaining such a DPP is nontrivial when there is no ready source of example optimal sets. In this paper we advocate an alternative framework for applying DPPs to recommender systems. Our approach assumes that the DPP simply models user engagements with recommended items, which is more consistent with how DPPs for recommender systems are typically trained. Under this assumption, we formulate a metric that measures the expected number of items a user will engage with, and we formalize the optimization of this metric as the Maximum Induced Cardinality (MIC) problem. Although the MIC objective is not submodular, we show that it can be approximated by a submodular function, and that empirically it is well optimized by a greedy algorithm.
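
As a concrete illustration of the induced-cardinality objective and the greedy heuristic mentioned in the abstract, the following is a minimal sketch rather than the paper's implementation. It assumes the standard DPP fact that, when engagements within a recommended set S follow the DPP restricted to S, their expected number is tr(I - (I + L_S)^{-1}); the random kernel, function names, and the naive re-evaluation of the objective at every step are illustrative choices.

```python
import numpy as np

def induced_cardinality(L, S):
    """Expected number of engaged items when the set S is recommended.

    Assumes engagements within S follow the DPP with kernel restricted to S,
    so the expected count is tr(I - (I + L_S)^{-1}).
    """
    if len(S) == 0:
        return 0.0
    L_S = L[np.ix_(S, S)]          # principal submatrix indexed by S
    k = len(S)
    return float(np.trace(np.eye(k) - np.linalg.inv(np.eye(k) + L_S)))

def greedy_mic(L, k):
    """Greedily grow a size-k set, adding the item with the largest gain
    in induced cardinality at each step (illustrative, not optimized)."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        remaining = [i for i in range(n) if i not in selected]
        gains = [induced_cardinality(L, selected + [i]) for i in remaining]
        selected.append(remaining[int(np.argmax(gains))])
    return selected

# Toy usage: a random PSD kernel over 20 items, choose 5 recommendations.
rng = np.random.default_rng(0)
B = rng.normal(size=(20, 8))
L = B @ B.T
print(greedy_mic(L, k=5))
```

A practical implementation would avoid recomputing the matrix inverse from scratch for every candidate item, for example by maintaining the inverse incrementally with rank-one or block update formulas; the sketch above trades that efficiency for brevity.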
