GANE: A Generative Adversarial Network Embedding

Network embedding provides low-dimensional feature representations for a wide range of machine learning applications. Existing work either 1) designs the embedding as an unsupervised learning task that explicitly preserves the structural connectivity of the network, or 2) obtains the embedding as a by-product of supervised training for a specific discriminative task in a deep neural network. In this paper, we combine these two lines of research from a multi-output learning perspective. Specifically, we propose a generative adversarial network embedding (GANE) model that adapts the generative adversarial framework so that the embedding is learned jointly with specific machine learning tasks. GANE consists of a generator that produces link edges and a discriminator that distinguishes the generated edges from real connections (edges) in the network. The Wasserstein-1 distance is adopted to train the generator for better stability. GANE is further extended to preserve the structural information of the original network by exploiting the pairwise connectivity of vertices. Experiments on real-world network datasets demonstrate that our models consistently outperform state-of-the-art solutions, with significant improvements on link prediction, clustering, and network alignment tasks.
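To make the adversarial setup concrete, below is a minimal sketch of a Wasserstein-style training loop for edge generation, written under stated assumptions rather than from the paper's actual implementation: the class names (Generator, Critic), the network sizes, the RMSProp hyper-parameters, and the use of weight clipping plus a REINFORCE-style update for the discrete edge sampler are all illustrative choices, not details confirmed by the abstract.

```python
# Hypothetical sketch of a GANE-like training step (not the authors' code).
# Generator: given a source vertex, proposes an endpoint for a "fake" edge.
# Critic: scores vertex pairs; trained with the Wasserstein-1 objective.
import torch
import torch.nn as nn

NUM_NODES, EMB_DIM = 1000, 64  # illustrative sizes

class Critic(nn.Module):
    """Scores a vertex pair (edge) from its two embedding vectors."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(NUM_NODES, EMB_DIM)
        self.score = nn.Sequential(
            nn.Linear(2 * EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, u, v):
        return self.score(torch.cat([self.emb(u), self.emb(v)], dim=-1)).squeeze(-1)

class Generator(nn.Module):
    """Given a source vertex, defines a distribution over candidate endpoints."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(NUM_NODES, EMB_DIM)

    def forward(self, u):
        logits = self.emb(u) @ self.emb.weight.t()  # affinity to every vertex
        return torch.distributions.Categorical(logits=logits)

def train_step(real_u, real_v, G, D, opt_g, opt_d, clip=0.01):
    # Critic step: maximize E[D(real edge)] - E[D(generated edge)].
    with torch.no_grad():
        fake_v = G(real_u).sample()
    d_loss = -(D(real_u, real_v).mean() - D(real_u, fake_v).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    for p in D.parameters():  # weight clipping keeps the critic roughly 1-Lipschitz (WGAN)
        p.data.clamp_(-clip, clip)

    # Generator step: endpoint sampling is discrete, so use a policy-gradient
    # (REINFORCE-style) update with the critic score as the reward.
    dist = G(real_u)
    fake_v = dist.sample()
    with torch.no_grad():
        reward = D(real_u, fake_v)
    g_loss = -(dist.log_prob(fake_v) * reward).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

G, D = Generator(), Critic()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
real_u = torch.randint(0, NUM_NODES, (128,))  # placeholder batch of observed edges
real_v = torch.randint(0, NUM_NODES, (128,))
print(train_step(real_u, real_v, G, D, opt_g, opt_d))
```

After training, either embedding table (typically the generator's) can be used as the vertex representations for downstream tasks such as link prediction or clustering; the structural extension described in the abstract would add a pairwise-connectivity term on top of this adversarial objective.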
