ROOT13: Spotting Hypernyms, Co-Hyponyms and Randoms
[1] Ido Dagan,et al. The Distributional Inclusion Hypotheses and Lexical Entailment , 2005, ACL.
[2] Chu-Ren Huang,et al. Taking Antonymy Mask off in Vector Space , 2014, PACLIC.
[3] Stefan Evert,et al. The Statistics of Word Cooccurrences: Word Pairs and Collocations , 2004 .
[4] G. Murphy,et al. The Big Book of Concepts , 2002 .
[5] Qin Lu,et al. Chasing Hypernyms in Vector Spaces with Entropy , 2014, EACL.
[6] Chu-Ren Huang,et al. Unsupervised Antonym-Synonym Discrimination in Vector Space , 2014 .
[7] Leo Breiman,et al. Random Forests , 2001, Machine Learning.
[8] Alessandro Lenci,et al. How we BLESSed distributional semantic evaluation , 2011, GEMS.
[9] Laura Rimell,et al. Distributional Lexical Entailment by Topic Coherence , 2014, EACL.
[10] Omer Levy,et al. Do Supervised Distributional Methods Really Learn Lexical Inference Relations? , 2015, NAACL.
[11] Chu-Ren Huang,et al. EVALution 1.0: an Evolving Semantic Dataset for Training and Evaluation of Distributional Semantic Models , 2015, LDL@IJCNLP.
[12] Patrick Pantel,et al. From Frequency to Meaning: Vector Space Models of Semantics , 2010, J. Artif. Intell. Res.
[13] David J. Weir,et al. Learning to Distinguish Hypernyms and Co-Hyponyms , 2014, COLING.