Angular-Based Word Meta-Embedding Learning

Ensembling word embeddings to improve distributed word representations has shown success for natural language processing tasks in recent years. These approaches either carry out straightforward mathematical operations over a set of vectors or use unsupervised learning to find a lower-dimensional representation. This work compares meta-embeddings trained with different losses, namely loss functions that account for the angular distance between the reconstructed embedding and the target, and those that account for normalized distances based on vector length. We argue that meta-embedding methods should treat each embedding in the ensemble equally during unsupervised learning, since the relative quality of each source embedding for downstream tasks is unknown prior to meta-embedding. We show that objectives which account for this normalization, such as cosine and KL-divergence losses, outperform meta-embeddings trained with standard $\ell_1$ and $\ell_2$ losses on \textit{de facto} word similarity and relatedness datasets, and that they outperform existing meta-embedding strategies.
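
To make the contrast between the angular and the standard reconstruction objectives concrete, the sketch below shows an autoencoder that reconstructs a concatenation of source embeddings using a cosine (angular) loss. This is a minimal illustration, not the authors' implementation: the layer sizes, the `tanh` activation, and the `sources` placeholder tensor are assumptions, and the $\ell_2$ baseline is obtained by simply swapping in a mean-squared-error loss.

```python
# Minimal sketch (assumed setup, not the paper's code): a linear autoencoder
# over concatenated source embeddings, trained with an angular (cosine) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaEmbeddingAE(nn.Module):
    def __init__(self, src_dims, meta_dim=300):
        super().__init__()
        total = sum(src_dims)                      # concatenated source embeddings
        self.encoder = nn.Linear(total, meta_dim)  # bottleneck = meta-embedding
        self.decoder = nn.Linear(meta_dim, total)

    def forward(self, x):
        z = torch.tanh(self.encoder(x))            # meta-embedding for each word
        return self.decoder(z), z

def cosine_loss(x_hat, x):
    # 1 - cos(x_hat, x): penalizes angular distance only, so the varying norms
    # of the source embeddings do not dominate the reconstruction objective.
    return (1.0 - F.cosine_similarity(x_hat, x, dim=-1)).mean()

# Hypothetical usage: `sources` stands in for a batch of concatenated
# word2vec / GloVe / fastText vectors for the same vocabulary items.
model = MetaEmbeddingAE(src_dims=[300, 300, 300], meta_dim=300)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
sources = torch.randn(64, 900)                     # placeholder embedding batch
x_hat, meta = model(sources)
loss = cosine_loss(x_hat, sources)                 # use F.mse_loss(x_hat, sources) for the l2 baseline
loss.backward()
optimizer.step()
```

Because the cosine objective is scale-invariant, each source embedding set contributes on equal footing regardless of its vector norms, which is the motivation for the normalized losses compared in this work.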
