Adversarially Regularized Graph Attention Networks for Inductive Learning on Partially Labeled Graphs

Graph embedding is a general approach to graph-analytic problems that encodes nodes into low-dimensional representations. Most existing embedding methods are transductive, since they require information about all nodes during training, including the nodes to be predicted. In this paper, we propose a novel inductive embedding method for semi-supervised learning on graphs. The method generates node representations by learning a parametric function that aggregates information from the neighborhood via an attention mechanism, and hence naturally generalizes to previously unseen nodes. Furthermore, adversarial training serves as an external regularizer that pushes the learned representations toward a prior distribution, improving robustness and generalization. Experiments on real-world graphs, both clean and noisy, demonstrate the effectiveness of this approach.
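
Below is a minimal sketch of the two ideas in the abstract: attention-based neighborhood aggregation and an adversarial regularizer that matches embeddings to a prior. It assumes PyTorch, single-head additive attention over a dense adjacency matrix with self-loops, and a standard Gaussian prior; the names AttentionAggregator, Discriminator, and adversarial_regularizer are hypothetical illustrations, not the paper's implementation.

```python
# Sketch only: attention aggregation + adversarial regularization toward a prior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionAggregator(nn.Module):
    """Aggregate neighbor features with single-head additive attention."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        h = self.W(x)                                   # (N, out_dim)
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pair).squeeze(-1))      # raw attention scores (N, N)
        e = e.masked_fill(adj == 0, float('-inf'))      # attend only to neighbors
        alpha = torch.softmax(e, dim=-1)                # normalized attention coefficients
        return F.elu(alpha @ h)                         # weighted neighborhood aggregation


class Discriminator(nn.Module):
    """Distinguish samples drawn from the prior from learned node embeddings."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z):
        return self.net(z)                              # logits: prior (real) vs. embedding (fake)


def adversarial_regularizer(disc, z):
    # Encourage embeddings z to match a standard Gaussian prior (an assumed choice of prior).
    prior = torch.randn_like(z)
    ones = torch.ones(z.size(0), 1)
    zeros = torch.zeros(z.size(0), 1)
    d_loss = (F.binary_cross_entropy_with_logits(disc(prior), ones) +
              F.binary_cross_entropy_with_logits(disc(z.detach()), zeros))  # discriminator objective
    g_loss = F.binary_cross_entropy_with_logits(disc(z), ones)              # embedding-side objective
    return d_loss, g_loss
```

In a full model, the discriminator and the embedding network would be updated alternately, as in standard GAN training, with the embedding-side loss added to the supervised classification loss on the labeled nodes.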
