Multimodal Deep Network Embedding With Integrated Structure and Attribute Information

Network embedding learns low-dimensional representations for the nodes of a network while preserving node features. Most existing studies leverage only network structure information and emphasize preserving structural features. However, nodes in real-world networks often carry a rich set of attributes that provide additional semantic information, and both structural and attribute features have been shown to be important for network analysis tasks. To preserve both, we investigate the problem of integrating structure and attribute information for network embedding and propose a multimodal deep network embedding (MDNE) method. MDNE captures non-linear network structures and the complex interactions between structures and attributes using a deep model composed of multiple layers of non-linear functions. Because structures and attributes are two different types of information, a multimodal learning method is adopted to pre-process them, helping the model better capture the correlations between node structure and attribute information. We define a loss function based on structural and attribute proximities to preserve the respective features, and obtain the representations by minimizing this loss. Results of extensive experiments on four real-world data sets show that the proposed method significantly outperforms baselines on a variety of tasks, demonstrating its effectiveness and generality.
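The abstract describes a deep model with one non-linear branch per modality (structure and attributes), a fusion step, and a loss combining structural and attribute proximities. The following toy sketch illustrates that overall shape; it is not the paper's implementation, and all names (`encode`, `mdne_loss`, the weight matrices, and the trade-off weight `alpha`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, d_attr, d_emb = 6, 4, 3

# Adjacency matrix (structural input) and node attribute matrix.
A = (rng.random((n_nodes, n_nodes)) > 0.6).astype(float)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)                      # undirected graph
X = rng.random((n_nodes, d_attr))           # node attributes

# One non-linear layer per modality plus a fusion layer: a toy stand-in
# for the deep multimodal encoder sketched in the abstract.
W_s = rng.standard_normal((n_nodes, d_emb))   # structural branch weights
W_a = rng.standard_normal((d_attr, d_emb))    # attribute branch weights
W_f = rng.standard_normal((2 * d_emb, d_emb)) # fusion weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(A, X):
    """Encode each modality separately, then fuse into one embedding."""
    h_s = sigmoid(A @ W_s)                  # structural representation
    h_a = sigmoid(X @ W_a)                  # attribute representation
    return sigmoid(np.hstack([h_s, h_a]) @ W_f)

def mdne_loss(Y, A, X, alpha=0.5):
    """Toy loss with the two terms the abstract names: a structural
    proximity term plus an attribute proximity term weighted by alpha."""
    diff2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    struct = (A * diff2).sum()              # linked nodes embed close
    S = X @ X.T                             # simple attribute similarity
    attr = (S * diff2).sum()                # similar attributes embed close
    return struct + alpha * attr

Y = encode(A, X)
loss = mdne_loss(Y, A, X)
print(Y.shape, loss)
```

In the actual method the encoder is deep (multiple stacked non-linear layers per branch) and the weights are learned by minimizing the combined loss with gradient descent; the sketch only fixes random weights to show how the two proximity terms enter one objective.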
