NE-FLGC: Network Embedding Based on Fusing Local (First-Order) and Global (Second-Order) Network Structure with Node Content

This paper studies representation learning for networks with textual information, which aims to learn low-dimensional vectors for nodes by leveraging both network structure and text. Most existing works focus on only one aspect of network structure and cannot jointly fuse first-order proximity, second-order proximity, and textual information. In this paper, we propose a novel network embedding method, NE-FLGC: Network Embedding based on Fusing Local (first-order) and Global (second-order) network structure with node Content. In particular, we adopt a context-enhancement method that obtains a node's embedding by concatenating the node's own vector with its context vectors. In experiments, we compare our model with existing network embedding models on four real-world datasets. The results demonstrate that NE-FLGC is stable and significantly outperforms state-of-the-art methods.
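The context-enhancement step described above can be sketched minimally: the final embedding of a node is the concatenation of its own vector with a context vector. The function name, dimensions, and the use of NumPy below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_embedding(node_vec: np.ndarray, context_vec: np.ndarray) -> np.ndarray:
    """Concatenate a node's own vector with its context vector.

    A hypothetical sketch of the concatenation-based fusion described
    in the abstract; dimensions are arbitrary choices for illustration.
    """
    return np.concatenate([node_vec, context_vec])

# Example: a 64-d structural vector fused with a 64-d context vector
node_vec = np.random.rand(64)     # e.g. learned from network structure
context_vec = np.random.rand(64)  # e.g. learned from node text/content
embedding = fuse_embedding(node_vec, context_vec)
assert embedding.shape == (128,)
```

Downstream tasks (e.g. node classification or link prediction) would then consume the fused 128-dimensional vector.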
