STAR-GCN: Stacked and Reconstructed Graph Convolutional Networks for Recommender Systems

We propose a new STAcked and Reconstructed Graph Convolutional Network (STAR-GCN) architecture that learns node representations to boost performance in recommender systems, especially in the cold-start scenario. STAR-GCN employs a stack of GCN encoder-decoders combined with intermediate supervision to improve the final prediction. Unlike the graph convolutional matrix completion model, which uses one-hot node inputs, STAR-GCN learns low-dimensional user and item latent factors as input, restraining the model's space complexity. Moreover, STAR-GCN can produce embeddings for new nodes by reconstructing masked input node embeddings, which directly tackles the cold-start problem. We also identify a label leakage issue that arises when training GCN-based models for link prediction and propose a training strategy that avoids it. Empirical results on multiple rating prediction benchmarks show that our model achieves state-of-the-art performance on four out of five real-world datasets and yields significant improvements in predicting ratings in the cold-start scenario. The code implementation is available in this https URL.
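The masked-reconstruction idea above can be illustrated with a minimal sketch: zero out the input embeddings of some nodes (mimicking unseen, cold-start nodes), run a graph-convolution encoder over the normalized adjacency, and train a decoder to reconstruct the masked embeddings. This is a hypothetical NumPy forward pass, not the authors' implementation; all names (`A_hat`, `W_enc`, `W_dec`, `recon_loss`) and the single-layer setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, d = 6, 4                       # tiny user-item graph, low-dim embeddings
A = rng.integers(0, 2, size=(n_nodes, n_nodes))
A = np.maximum(A, A.T)                  # symmetric adjacency
np.fill_diagonal(A, 1)                  # add self-loops

# Symmetrically normalized adjacency, as in standard GCNs: D^{-1/2} A D^{-1/2}
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

X = rng.normal(size=(n_nodes, d))       # learned low-dimensional node embeddings

# Mask a subset of nodes: zero their inputs, mimicking cold-start nodes
masked = np.array([1, 4])
X_in = X.copy()
X_in[masked] = 0.0

# Encoder: one graph convolution with ReLU; decoder: linear map back
W_enc = rng.normal(size=(d, d)) * 0.1
W_dec = rng.normal(size=(d, d)) * 0.1
H = np.maximum(A_hat @ X_in @ W_enc, 0.0)
X_rec = H @ W_dec

# Reconstruction error on the masked nodes is the extra supervision signal
# that teaches the model to infer embeddings for new nodes
recon_loss = np.mean((X_rec[masked] - X[masked]) ** 2)
print(recon_loss)
```

In the full model this reconstruction loss would be combined with the rating-prediction loss and backpropagated through a stack of such encoder-decoder blocks; the sketch only shows where the cold-start training signal comes from.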
