Decentralized Knowledge Graph Representation Learning

Knowledge graph (KG) representation learning methods have achieved competitive performance on many KG-oriented tasks. The best of them are usually based on graph neural networks (GNNs), a powerful family of models that learns the representation of an entity by aggregating the features of the entity itself and its neighbors. However, many KG representation learning scenarios provide only structural information describing the relationships among entities, so entities have no input features. In this case, existing aggregation mechanisms cannot induce embeddings for unseen entities, as these entities have no pre-defined features to aggregate. In this paper, we present a decentralized KG representation learning approach, decentRL, which encodes each entity from, and only from, the embeddings of its neighbors. For optimization, we design an algorithm that distills knowledge from the model itself, such that the output embeddings continuously gain knowledge from the corresponding original embeddings. Extensive experiments show that the proposed approach performed better than many cutting-edge models on the entity alignment task and achieved competitive performance on the entity prediction task. Furthermore, under the inductive setting, it significantly outperformed all baselines on both tasks.
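To make the two ideas in the abstract concrete, the PyTorch sketch below shows one way a neighbor-only aggregator and a self-distillation objective could be wired up. Everything here is an illustrative assumption rather than the paper's actual architecture: the class name `DecentralizedLayer`, the neighbor-mean attention query, the padded `neighbor_idx`/`mask` encoding of the graph, and the MSE distillation loss are all stand-ins chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecentralizedLayer(nn.Module):
    """Encodes each entity from, and only from, its neighbors' embeddings.

    Because an entity's own vector never enters the aggregation, an unseen
    entity can still be encoded as long as its neighbors have embeddings.
    """

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, entity_emb, neighbor_idx, mask):
        # entity_emb:   (N, dim)      embedding table for all N entities
        # neighbor_idx: (N, max_deg)  padded neighbor ids for each entity
        # mask:         (N, max_deg)  True where a slot holds a real neighbor
        neigh = entity_emb[neighbor_idx]                 # (N, max_deg, dim)
        # The attention query is the mean of the neighbors themselves
        # (an illustrative choice), so the center entity is never used.
        query = neigh.mean(dim=1, keepdim=True).expand_as(neigh)
        scores = self.attn(torch.cat([query, neigh], dim=-1)).squeeze(-1)
        scores = scores.masked_fill(~mask, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)            # assumes >= 1 neighbor
        return torch.einsum("nk,nkd->nd", alpha, neigh)  # (N, dim)


def self_distillation_loss(output_emb, original_emb):
    # Push the aggregated output toward a detached copy of the original
    # embedding, a crude stand-in for the paper's distillation objective:
    # the output keeps "gaining knowledge" from the original embedding
    # without back-propagating into the target.
    return F.mse_loss(output_emb, original_emb.detach())


# Toy usage: 4 entities, dimension 8, up to 3 neighbors each (hypothetical graph).
emb = nn.Embedding(4, 8)
layer = DecentralizedLayer(8)
idx = torch.tensor([[1, 2, 0], [0, 2, 0], [0, 1, 3], [1, 2, 0]])
mask = torch.tensor([[True, True, False]] * 4)
out = layer(emb.weight, idx, mask)
loss = self_distillation_loss(out, emb.weight)
```

Note the inductive property the sketch is meant to exhibit: a new entity row needs only valid neighbor ids in `neighbor_idx` to receive an output embedding, since its own (nonexistent) features are never consulted.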
