RETRO: Relation Retrofitting For In-Database Machine Learning on Textual Data

There are massive amounts of textual data residing in databases, valuable for many machine learning (ML) tasks. Since ML techniques depend on numerical input representations, word embeddings are increasingly used to convert symbolic representations such as text into meaningful numbers. However, a naive one-to-one mapping of each word in a database to a word embedding vector is not sufficient and leads to poor accuracy in ML tasks. Thus, we argue to additionally incorporate the information given by the database schema into the embedding, e.g., which words appear in the same column or are related to each other. In this paper, we propose RETRO (RElational reTROfitting), a novel approach to learn numerical representations of text values in databases, capturing the best of both worlds: the rich information encoded by word embeddings and the relational information encoded by database tables. We formulate relation retrofitting as a learning problem and present an efficient algorithm to solve it. We investigate the impact of various hyperparameters on the learning problem and derive good settings for all of them. Our evaluation shows that the proposed embeddings are ready to use for many ML tasks such as classification and regression, and even outperform state-of-the-art techniques in integration tasks such as null value imputation and link prediction.
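The abstract describes relation retrofitting only at a high level. As a minimal sketch, the following Python snippet adapts the classic quadratic retrofitting objective (Faruqui et al., NAACL 2015) to the relational setting the abstract describes: every text value is pulled toward its original word embedding while also being pulled toward the embeddings of values in the same column and of values it is related to. The function name (`relational_retrofit`), the weights (`alpha`, `beta`, `gamma`), and the neighbor construction are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def relational_retrofit(base, column_groups, related_pairs,
                        alpha=1.0, beta=1.0, gamma=1.0, iterations=20):
    """Illustrative sketch of relation retrofitting; the weighting scheme
    and parameter names (alpha, beta, gamma) are assumptions, not RETRO's
    published formulation.

    base          : dict term -> np.ndarray with the pre-trained vectors
    column_groups : lists of terms that appear in the same table column
    related_pairs : (term, term) tuples connected via row/foreign-key links
    """
    # Start from copies of the original embeddings.
    retro = {t: v.copy() for t, v in base.items()}

    # Derive two neighbor sets from the relational structure.
    col_nbrs = {t: set() for t in retro}
    for group in column_groups:
        members = [t for t in group if t in retro]
        for t in members:
            col_nbrs[t].update(u for u in members if u != t)

    rel_nbrs = {t: set() for t in retro}
    for a, b in related_pairs:
        if a in retro and b in retro:
            rel_nbrs[a].add(b)
            rel_nbrs[b].add(a)

    # Coordinate-descent updates of a quadratic loss: each vector moves
    # toward a weighted average of its original embedding, its column
    # neighbors, and its relational neighbors.
    for _ in range(iterations):
        for t in retro:
            num, den = alpha * base[t], alpha
            if col_nbrs[t]:
                w = beta / len(col_nbrs[t])
                for u in col_nbrs[t]:
                    num, den = num + w * retro[u], den + w
            if rel_nbrs[t]:
                w = gamma / len(rel_nbrs[t])
                for u in rel_nbrs[t]:
                    num, den = num + w * retro[u], den + w
            retro[t] = num / den
    return retro


# Toy usage with made-up data: city names share one column, country names
# share another, and each city row references its country row.
base = {w: np.random.randn(50) for w in ["berlin", "paris", "germany", "france"]}
columns = [["berlin", "paris"], ["germany", "france"]]
relations = [("berlin", "germany"), ("paris", "france")]
vectors = relational_retrofit(base, columns, relations)
```

Because the sketched loss is quadratic, each update has a closed form: the new vector is a weighted average of its original embedding and its neighbors' current vectors. RETRO's actual objective and solver may differ in detail.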
