Word Embeddings - Skip-Gram Model

Word embeddings are of great importance for almost any NLP task: they map each word in a vocabulary to a dense vector. The skip-gram model is one way to learn such embeddings. Given an input word, it predicts the surrounding words that occur within a fixed distance of it, i.e., it predicts the context from the word. Because words occurring in similar contexts tend to have similar meanings, the learned vectors capture semantic relationships between words. This paper explains word embeddings learned with the skip-gram model, its architecture, and its implementation.
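To make the training objective concrete, the following is a minimal sketch of skip-gram training in plain NumPy. It builds (centre, context) pairs within a fixed window over a toy corpus and learns embeddings by minimising the cross-entropy of p(context | centre) under a full softmax; the corpus, hyperparameters, and variable names are all illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

# Toy corpus and hyperparameters; all of these values are illustrative assumptions.
corpus = "the quick brown fox jumps over the lazy dog".split()
window = 2      # maximum distance between the centre word and a context word
dim = 16        # embedding dimensionality
lr = 0.05       # SGD learning rate
epochs = 200

# Vocabulary: map each distinct word to an integer index.
vocab = sorted(set(corpus))
word2id = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Training data: one (centre, context) pair for every context word
# within `window` positions of each centre word.
pairs = []
for i in range(len(corpus)):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            pairs.append((word2id[corpus[i]], word2id[corpus[j]]))

# Two weight matrices: the rows of W_in are the embeddings we ultimately keep,
# the rows of W_out are the output ("context") representations.
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, dim))
W_out = rng.normal(scale=0.1, size=(V, dim))

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# SGD on the cross-entropy loss -log p(context | centre), full softmax.
for _ in range(epochs):
    for centre, context in pairs:
        h = W_in[centre]              # hidden layer = the centre word's embedding
        p = softmax(W_out @ h)        # predicted distribution over the vocabulary
        err = p.copy()
        err[context] -= 1.0           # gradient of the loss w.r.t. the scores
        grad_h = W_out.T @ err        # gradient w.r.t. the centre embedding
        W_out -= lr * np.outer(err, h)
        W_in[centre] -= lr * grad_h

# After training, each row of W_in is a learned word vector.
print(W_in[word2id["fox"]][:5])
```

The full softmax is used here only because it keeps the gradient derivation transparent; practical implementations such as word2vec replace it with negative sampling or hierarchical softmax, since computing a distribution over the whole vocabulary at every step is too expensive for realistic corpora.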
