Linear Algebraic Structure of Word Senses, with Applications to Polysemy
Sanjeev Arora | Yuanzhi Li | Yingyu Liang | Tengyu Ma | Andrej Risteski
[1] J. R. Firth,et al. A Synopsis of Linguistic Theory, 1930-1955 , 1957 .
[2] Andrew Y. Ng,et al. Improving Word Representations via Global Context and Multiple Word Prototypes , 2012, ACL.
[3] Tom M. Mitchell,et al. Learning Effective and Interpretable Semantic Models using Non-Negative Sparse Embedding , 2012, COLING.
[4] Matthew E. P. Davies,et al. SMALLbox - An Evaluation Framework for Sparse Representations and Dictionary Learning Algorithms , 2010, LVA/ICA.
[5] Tiziano Flati,et al. WoSIT: A Word Sense Induction Toolkit for Search Result Clustering and Diversification , 2014, ACL.
[6] Chong Wang,et al. Reading Tea Leaves: How Humans Interpret Topic Models , 2009, NIPS.
[7] Mihai Surdeanu,et al. The Stanford CoreNLP Natural Language Processing Toolkit , 2014, ACL.
[8] Roberto Navigli,et al. Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction , 2013, CL.
[9] Hinrich Schütze,et al. Automatic Word Sense Discrimination , 1998, Comput. Linguistics.
[10] Christiane Fellbaum,et al. Book Reviews: WordNet: An Electronic Lexical Database , 1999, CL.
[11] Omer Levy,et al. Neural Word Embedding as Implicit Matrix Factorization , 2014, NIPS.
[12] Geoffrey Zweig,et al. Linguistic Regularities in Continuous Space Word Representations , 2013, NAACL.
[13] Sanjeev Arora,et al. RAND-WALK: A Latent Variable Model Approach to Word Embeddings , 2015 .
[14] Sanjeev Arora,et al. A Latent Variable Model Approach to PMI-based Word Embeddings , 2015, TACL.
[15] Patrick Pantel,et al. From Frequency to Meaning: Vector Space Models of Semantics , 2010, J. Artif. Intell. Res..
[16] Julia Hirschberg,et al. V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure , 2007, EMNLP.
[17] Michael W. Mahoney,et al. Skip-Gram − Zipf + Uniform = Vector Additivity , 2017, ACL.
[18] David Yarowsky,et al. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods , 1995, ACL.
[19] Sanjeev Arora,et al. A Simple but Tough-to-Beat Baseline for Sentence Embeddings , 2017, ICLR.
[20] Jeffrey Pennington,et al. GloVe: Global Vectors for Word Representation , 2014, EMNLP.
[21] Yoshua Bengio,et al. A Neural Probabilistic Language Model , 2003, J. Mach. Learn. Res..
[22] Geoffrey E. Hinton,et al. Three new graphical models for statistical language modelling , 2007, ICML '07.
[23] N. Scott. John Rupert Firth , 1961, Bulletin of the School of Oriental and African Studies.
[24] A. Bruckstein,et al. K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation , 2005 .
[25] Thomas L. Griffiths,et al. Hierarchical Topic Models and the Nested Chinese Restaurant Process , 2003, NIPS.
[26] Mark Steyvers,et al. Topics in semantic representation. , 2007, Psychological review.
[27] Kenneth Ward Church,et al. Word Association Norms, Mutual Information, and Lexicography , 1989, ACL.
[28] Andrew McCallum,et al. Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space , 2014, EMNLP.
[29] Julio Gonzalo,et al. The role of named entities in Web People Search , 2009, EMNLP.
[30] M. Elad,et al. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation , 2006, IEEE Transactions on Signal Processing.
[31] David M. Blei,et al. Probabilistic topic models , 2012, Commun. ACM.
[32] Tom Michael Mitchell,et al. Predicting Human Brain Activity Associated with the Meanings of Nouns , 2008, Science.
[33] Yulia Tsvetkov,et al. Sparse Overcomplete Word Vector Representations , 2015, ACL.
[34] Michael Elad,et al. Sparse and Redundant Representations - From Theory to Applications in Signal and Image Processing , 2010 .
[35] Raymond J. Mooney,et al. Multi-Prototype Vector-Space Models of Word Meaning , 2010, NAACL.
[36] David J. Field,et al. Sparse coding with an overcomplete basis set: A strategy employed by V1? , 1997, Vision Research.
[37] Pramod Viswanath,et al. Geometry of Polysemy , 2016, ICLR.
[38] Roberto Navigli,et al. SemEval-2013 Task 11: Word Sense Induction and Disambiguation within an End-User Application , 2013, SemEval@NAACL-HLT.
[39] Suresh Manandhar,et al. SemEval-2010 Task 14: Word Sense Induction & Disambiguation , 2010, SemEval@ACL.
[40] Yoram Singer,et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization , 2011, J. Mach. Learn. Res..
[41] Mirella Lapata,et al. Bayesian Word Sense Induction , 2009, EACL.
[42] Naoaki Okazaki,et al. The mechanism of additive composition , 2015, Machine Learning.
[43] Jeffrey Dean,et al. Distributed Representations of Words and Phrases and their Compositionality , 2013, NIPS.
[44] Kenneth Heafield,et al. N-gram Counts and Language Models from the Common Crawl , 2014, LREC.
[45] Ignacio Iacobacci,et al. SensEmbed: Learning Sense Embeddings for Word and Relational Similarity , 2015, ACL.