Revisit Gaussian Embedding: An Effective Method for Scalable Knowledge Graph

Knowledge Graph with Gaussian Embedding (KG2E) is designed to capture the uncertainty of knowledge and hence shows great potential in applications such as information retrieval and crisis management. However, it is computationally expensive and limited to a Mahalanobis distance, which essentially treats relations as translation operators. This paper describes the theory and implementation of a highly scalable and effective solver that simplifies KG2E by decomposing its objective into two parts: an adaptive margin and a distance metric. For the first part, we provide a sound theoretical guarantee of the equivalence between uncertainty and the adaptive margin, and show that, under certain assumptions, it can be computed in linear time with the help of SVD. For the second part, we extend the distance metric to capture the more complex effects of relations, enhancing the model's flexibility. Compared to the original KG2E, our methods accelerate convergence and improve performance (over 10% improvement in Hits@10). On four large-scale knowledge graph benchmarks, our methods also achieve performance better than or comparable to state-of-the-art models.
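For readers unfamiliar with the objective being decomposed, the sketch below illustrates, under the assumption of the KL-divergence variant of KG2E with diagonal covariances, how the score between the entity-pair distribution and the relation distribution separates into a mean-dependent Mahalanobis term and a covariance-only term; the latter is the kind of uncertainty quantity an adaptive margin can absorb. The function name and toy dimensionality are illustrative and not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): the KL-based KG2E score
# between the entity-pair distribution P_e = N(mu_h - mu_t, Sigma_h + Sigma_t)
# and the relation distribution P_r = N(mu_r, Sigma_r), with diagonal
# covariances. The score splits into a mean-dependent Mahalanobis distance
# and a covariance-only (uncertainty) term, which motivates a
# "distance metric + adaptive margin" decomposition.
import numpy as np

def kg2e_kl_score(mu_h, mu_t, mu_r, var_h, var_t, var_r):
    """Return (mahalanobis_term, covariance_term) of KL(P_e || P_r)."""
    var_e = var_h + var_t                 # diagonal covariance of P_e
    diff = mu_r - (mu_h - mu_t)           # translation residual
    # Mean-dependent part: a Mahalanobis distance under Sigma_r^{-1}.
    mahalanobis = np.sum(diff * diff / var_r)
    # Mean-independent part: depends only on the covariances (uncertainty).
    covariance = np.sum(var_e / var_r) - len(mu_r) + np.sum(np.log(var_r / var_e))
    return 0.5 * mahalanobis, 0.5 * covariance

# Toy usage with random 4-dimensional embeddings.
rng = np.random.default_rng(0)
d = 4
mu_h, mu_t, mu_r = rng.normal(size=(3, d))
var_h, var_t, var_r = rng.uniform(0.05, 0.1, size=(3, d))  # bounded variances
dist, margin_like = kg2e_kl_score(mu_h, mu_t, mu_r, var_h, var_t, var_r)
print(f"distance term: {dist:.4f}, uncertainty (margin-like) term: {margin_like:.4f}")
```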