Exploring Effects of Random Walk Based Minibatch Selection Policy on Knowledge Graph Completion

In this paper, we explore the effects of different minibatch sampling techniques on Knowledge Graph Completion. Knowledge Graph Completion (KGC), or Link Prediction, is the task of predicting missing facts in a knowledge graph. KGC models are usually trained with a margin, soft-margin, or cross-entropy loss function that promotes assigning a higher score or probability to true fact triplets, and minibatch gradient descent is used to optimize these losses. However, because each minibatch consists of only a few triplets sampled at random from a large knowledge graph, any entity appearing in a minibatch typically occurs only once. As a result, these loss functions ignore all other neighbors of an entity whose embedding is being updated at a given minibatch step. We propose a new random-walk based minibatch sampling technique for training KGC models that optimizes the loss over a minibatch of closely connected triplets forming a subgraph, rather than over randomly selected triplets. We report experimental results for different models and datasets with our sampling technique and find that its effects vary across these datasets and models. In particular, our proposed method achieves state-of-the-art performance on the DB100K dataset.
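To make the idea concrete, the following is a minimal sketch of a random-walk based minibatch sampler over a set of triples. It is a hypothetical illustration, not the paper's exact algorithm: starting from a random entity, the walk repeatedly hops through an incident triple to a neighboring entity, collecting each traversed triple until the batch is full, so the resulting minibatch forms a closely connected subgraph instead of a uniform random sample. The function name `random_walk_minibatch` and the restart/step-limit details are assumptions for this sketch.

```python
import random
from collections import defaultdict

def random_walk_minibatch(triples, batch_size, seed=None):
    """Sample a minibatch of closely connected triples via a random walk.

    Hypothetical sketch of the general technique: hop between entities
    along incident (head, relation, tail) triples, collecting distinct
    triples until the batch is full or a step budget is exhausted.
    """
    rng = random.Random(seed)

    # Adjacency: entity -> list of incident triples (in either direction).
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((h, r, t))
        adj[t].append((h, r, t))
    entities = list(adj)

    batch, seen = [], set()
    current = rng.choice(entities)
    steps, max_steps = 0, 50 * batch_size  # guard against small components
    while len(batch) < batch_size and steps < max_steps:
        steps += 1
        if not adj[current]:
            current = rng.choice(entities)  # dead end: restart elsewhere
            continue
        h, r, t = rng.choice(adj[current])
        if (h, r, t) not in seen:
            seen.add((h, r, t))
            batch.append((h, r, t))
        current = t if current == h else h  # hop to the other endpoint
    return batch
```

In a real training loop this sampler would replace uniform triplet sampling, so that each gradient step sees an entity together with several of its graph neighbors.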
