Implanting Rational Knowledge into Distributed Representation at Morpheme Level

Previous work has paid little attention to creating unambiguous morpheme embeddings independent of the corpus, even though such information plays an important role in expressing the exact meanings of words in parataxis languages such as Chinese. In this paper, after constructing a Chinese lexical and semantic ontology based on word formation, we propose a novel approach to implanting structured rational knowledge into distributed representations at the morpheme level, naturally avoiding heavy disambiguation in the corpus. We design a template that creates instances as pseudo-sentences solely from the morpheme knowledge encoded in the lexicon. To exploit hierarchical information and tackle data sparseness, an instance-proliferation technique based on similarity is applied to expand the collection of pseudo-sentences. Distributed representations for morphemes are then trained on these pseudo-sentences using word2vec. For evaluation, we validate the paradigmatic and syntagmatic relations captured by the morpheme embeddings and apply them to word-similarity measurement, achieving significant improvements over classical models by more than 5 Spearman score points, or 8 percentage points, which shows very promising prospects for this new source of knowledge.
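To make the described pipeline concrete, the following is a minimal sketch in Python using gensim's word2vec and SciPy. The toy morpheme lexicon, the template wording, the proliferation rule, the hyperparameters, and the gold similarity pairs are all illustrative assumptions rather than the paper's actual resources or settings.

```python
# Minimal sketch: pseudo-sentences from a morpheme lexicon -> word2vec -> Spearman evaluation.
# Assumes gensim >= 4.0 and scipy; the lexicon below is a toy stand-in for the real ontology.
from gensim.models import Word2Vec
from scipy.stats import spearmanr

# Hypothetical lexicon: morpheme -> (semantic class, gloss tokens).
lexicon = {
    "木": ("plant", ["树", "材料"]),
    "林": ("plant", ["树", "成片"]),
    "水": ("liquid", ["液体", "河"]),
    "河": ("liquid", ["液体", "流"]),
}

# Step 1: turn each lexicon entry into a pseudo-sentence via a fixed template.
def to_pseudo_sentence(morpheme, sem_class, gloss):
    return [morpheme, sem_class] + gloss

corpus = [to_pseudo_sentence(m, c, g) for m, (c, g) in lexicon.items()]

# Step 2: instance proliferation -- expand the corpus by pairing morphemes that
# share a semantic class (a crude similarity proxy used here only for illustration).
for m1, (c1, g1) in lexicon.items():
    for m2, (c2, g2) in lexicon.items():
        if m1 != m2 and c1 == c2:
            corpus.append([m1, m2, c1] + g1 + g2)

# Step 3: train morpheme embeddings on the pseudo-sentences with word2vec (skip-gram).
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Step 4: evaluate against a (toy) gold similarity set using Spearman's rho.
gold = [("木", "林", 0.9), ("木", "水", 0.2)]
pred = [model.wv.similarity(a, b) for a, b, _ in gold]
rho, _ = spearmanr([s for *_, s in gold], pred)
print(f"Spearman rho on toy data: {rho:.3f}")
```

In the real setting, the template and proliferation step would draw on the word-formation ontology rather than a hand-written dictionary, and evaluation would use a standard Chinese word-similarity benchmark.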
