Word Sense Disambiguation (WSD), a challenging task in Natural Language Processing (NLP), aims to identify the correct sense of an ambiguous word in a given context. There are two mainstream approaches to WSD. Supervised methods train a classifier on labeled contexts to produce a probability distribution over word senses, while knowledge-based (unsupervised) methods, which focus on glosses (word sense definitions), score each context-gloss pair by similarity to select the correct sense. In this paper, we propose WSD-GAN, a generative adversarial framework that combines the two mainstream approaches. The generative model, based on supervised methods, tries to generate a probability distribution over the word senses, while the discriminative model, based on knowledge-based methods, predicts the relevancy of context-gloss pairs and identifies the correct pairs among the candidates. Furthermore, to optimize both models jointly, we leverage policy gradient so that the two models mutually enhance each other's performance. Our experimental results show that WSD-GAN achieves competitive results on several English all-words WSD datasets.
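The adversarial setup described above can be sketched in a minimal toy form: a generator scores the candidate senses of a word and is trained with a REINFORCE-style policy-gradient step, while a discriminator scores context-gloss pairs and supplies the reward. Everything below (the feature vectors, the linear models, the 0.5 reward baseline) is an illustrative assumption for exposition, not the paper's actual architecture.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

class Generator:
    """Supervised-style model: scores every candidate sense of a word
    from a context vector and outputs a probability distribution."""
    def __init__(self, n_senses, dim):
        self.w = [[0.0] * dim for _ in range(n_senses)]

    def probs(self, ctx):
        scores = [sum(wi * xi for wi, xi in zip(row, ctx)) for row in self.w]
        return softmax(scores)

    def policy_gradient_update(self, ctx, sense, reward, lr=0.1):
        # REINFORCE step: w += lr * reward * grad log p(sense | ctx).
        p = self.probs(ctx)
        for s, row in enumerate(self.w):
            coeff = lr * reward * ((1.0 if s == sense else 0.0) - p[s])
            for j, x in enumerate(ctx):
                row[j] += coeff * x

class Discriminator:
    """Knowledge-based-style model: rates how well a gloss matches a
    context (logistic regression on the elementwise product)."""
    def __init__(self, dim):
        self.w = [0.0] * dim

    def score(self, ctx, gloss):
        z = sum(wi * c * g for wi, c, g in zip(self.w, ctx, gloss))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, ctx, gloss, label, lr=0.1):
        # One gradient step of logistic regression toward label in {0, 1}.
        p = self.score(ctx, gloss)
        for j, (c, g) in enumerate(zip(ctx, gloss)):
            self.w[j] += lr * (label - p) * c * g

# Toy data: one ambiguous word with two senses; each context vector is
# aligned with the gloss vector of its gold sense (hypothetical features).
random.seed(0)
glosses = [[1, 0, 1, 0], [0, 1, 0, 1]]
data = [([1, 0, 1, 0], 0), ([0, 1, 0, 1], 1)]  # (context, gold sense)

gen = Generator(n_senses=2, dim=4)
disc = Discriminator(dim=4)

for epoch in range(300):
    for ctx, gold in data:
        # Discriminator learns to rate the gold context-gloss pair highly.
        disc.update(ctx, glosses[gold], label=1)
        # Generator samples a sense from its current distribution.
        sampled = random.choices(range(2), weights=gen.probs(ctx))[0]
        # Reward: how plausible the discriminator finds the sampled pair,
        # with 0.5 as a baseline so implausible pairs earn no reward.
        reward = disc.score(ctx, glosses[sampled]) - 0.5
        gen.policy_gradient_update(ctx, sampled, reward)
        # Discriminator also learns to rate generator samples as fakes.
        disc.update(ctx, glosses[sampled], label=0)

final_probs = [gen.probs(ctx) for ctx, _ in data]
```

On this toy data the reward signal steers the generator toward the gold sense of each context, which mirrors the mutual-enhancement loop the abstract describes: the discriminator's judgments on context-gloss pairs become the training signal for the supervised-style generator.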