Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations
Sidak Pal Singh | Andreas Hug | Aymeric Dieuleveut | Martin Jaggi
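As background for the building blocks named in the title, the following is a minimal NumPy sketch of the two optimal-transport primitives cited in the reference list below: the entropic (Sinkhorn) transport cost between two histograms [22] and a fixed-support Wasserstein barycenter computed with iterative Bregman projections [17, 37, 51]. It is a generic illustration under simplifying assumptions (shared discrete support, dense cost matrix, fixed regularization eps), not the paper's Context Mover's Distance implementation; all function names and parameters are illustrative.

import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iters=500):
    """Entropic OT cost <P, C> between histograms a (n,) and b (m,)
    for a ground-cost matrix C (n x m), via Sinkhorn's scaling iterations."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        v = b / (K.T @ u)                 # enforce the second marginal
        u = a / (K @ v)                   # enforce the first marginal
    P = u[:, None] * K * v[None, :]       # approximate transport plan
    return float(np.sum(P * C))

def wasserstein_barycenter(B, C, weights, eps=0.1, n_iters=500):
    """Entropic barycenter of the histogram columns of B (n x k) on a shared
    support with cost matrix C (n x n), via iterative Bregman projections."""
    n, k = B.shape
    K = np.exp(-C / eps)
    U = np.ones((n, k))
    V = np.ones((n, k))
    for _ in range(n_iters):
        V = B / (K.T @ U)                 # project onto the fixed marginals b_k
        # weighted geometric mean of the current first marginals = barycenter estimate
        a = np.exp(np.log(U * (K @ V)) @ weights)
        U = a[:, None] / (K @ V)          # project onto the shared marginal a
    return a

# Toy usage on a shared 5-point support (illustrative only):
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
C = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
C /= C.max()                              # normalize costs so exp(-C/eps) stays well-scaled
a = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
b = np.array([0.3, 0.1, 0.1, 0.3, 0.2])
print(sinkhorn_cost(a, b, C))
print(wasserstein_barycenter(np.stack([a, b], axis=1), C, np.array([0.5, 0.5])))

In the paper's setting, such histograms would correspond to an entity's distribution over its co-occurring contexts, with the ground cost derived from context embeddings; the sketch above leaves those modeling choices out.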
[1] Richard Sinkhorn. A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices, 1964.
[2] John B. Goodenough, et al. Contextual correlates of synonymy, 1965, CACM.
[3] Kenneth Ward Church, et al. Word Association Norms, Mutual Information, and Lexicography, 1989, ACL.
[4] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[5] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[6] Yoshua Bengio, et al. A Neural Probabilistic Language Model, 2003, J. Mach. Learn. Res.
[7] David J. Weir, et al. A General Framework for Distributional Similarity, 2003, EMNLP.
[8] Leonidas J. Guibas, et al. The Earth Mover's Distance as a Metric for Image Retrieval, 2000, International Journal of Computer Vision.
[9] Ido Dagan, et al. The Distributional Inclusion Hypotheses and Lexical Entailment, 2005, ACL.
[10] L. Kantorovich. On the Translocation of Masses, 2006.
[11] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008.
[12] Jason Weston, et al. A unified architecture for natural language processing: deep neural networks with multitask learning, 2008, ICML '08.
[13] Mirella Lapata, et al. Vector-based Models of Semantic Composition, 2008, ACL.
[14] C. Villani. Optimal Transport: Old and New, 2008.
[15] Silvia Bernardini, et al. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora, 2009, Lang. Resour. Evaluation.
[16] Ido Dagan, et al. Directional distributional similarity for lexical inference, 2010, Natural Language Engineering.
[17] Guillaume Carlier, et al. Barycenters in the Wasserstein Space, 2011, SIAM J. Math. Anal.
[18] Alessandro Lenci, et al. How we BLESSed distributional semantic evaluation, 2011, GEMS.
[19] Andrew Y. Ng, et al. Improving Word Representations via Global Context and Multiple Word Prototypes, 2012, ACL.
[20] Raffaella Bernardi, et al. Entailment above the word level in distributional semantics, 2012, EACL.
[21] Eneko Agirre, et al. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity, 2012, *SEMEVAL.
[22] Marco Cuturi, et al. Sinkhorn Distances: Lightspeed Computation of Optimal Transport, 2013, NIPS.
[23] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[24] Eneko Agirre, et al. *SEM 2013 shared task: Semantic Textual Similarity, 2013, *SEMEVAL.
[25] Omer Levy, et al. Dependency-Based Word Embeddings, 2014, ACL.
[26] Qin Lu, et al. Chasing Hypernyms in Vector Spaces with Entropy, 2014, EACL.
[27] Claire Cardie, et al. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity, 2014, *SEMEVAL.
[28] Yoon Kim, et al. Convolutional Neural Networks for Sentence Classification, 2014, EMNLP.
[29] Phil Blunsom, et al. A Convolutional Neural Network for Modelling Sentences, 2014, ACL.
[30] Mihai Surdeanu, et al. The Stanford CoreNLP Natural Language Processing Toolkit, 2014, ACL.
[31] Saif Mohammad, et al. Experiments with three approaches to recognizing lexical entailment, 2014, Natural Language Engineering.
[32] Laura Rimell, et al. Distributional Lexical Entailment by Topic Coherence, 2014, EACL.
[33] Manaal Faruqui, et al. Community Evaluation and Exchange of Word Vectors at wordvectors.org, 2014, ACL.
[34] Wanxiang Che, et al. Learning Sense-specific Word Embeddings By Exploiting Bilingual Resources, 2014, COLING.
[35] Omer Levy, et al. Focused Entailment Graphs for Open IE Propositions, 2014, CoNLL.
[36] David J. Weir, et al. Learning to Distinguish Hypernyms and Co-Hyponyms, 2014, COLING.
[37] Arnaud Doucet, et al. Fast Computation of Wasserstein Barycenters, 2013, ICML.
[38] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[39] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[40] Omer Levy, et al. Neural Word Embedding as Implicit Matrix Factorization, 2014, NIPS.
[41] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, IEEE International Conference on Computer Vision (ICCV).
[42] Andrew McCallum, et al. Word Representations via Gaussian Embedding, 2014, ICLR.
[43] Claire Cardie, et al. SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability, 2015, *SEMEVAL.
[44] Matt J. Kusner, et al. From Word Embeddings To Document Distances, 2015, ICML.
[45] Omer Levy, et al. Improving Distributional Similarity with Lessons Learned from Word Embeddings, 2015, TACL.
[46] Stephen Clark, et al. Exploiting Image Generality for Lexical Entailment Detection, 2015, ACL.
[47] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[48] Omer Levy, et al. A Simple Word Embedding Model for Lexical Substitution, 2015, VS@HLT-NAACL.
[49] Chu-Ren Huang, et al. EVALution 1.0: an Evolving Semantic Dataset for Training and Evaluation of Distributional Semantic Models, 2015, LDL@IJCNLP.
[50] Alessandro Moschitti, et al. Twitter Sentiment Analysis with Deep Convolutional Neural Networks, 2015, SIGIR.
[51] Gabriel Peyré, et al. Iterative Bregman Projections for Regularized Transportation Problems, 2014, SIAM J. Sci. Comput.
[52] Hal Daumé, et al. Deep Unordered Composition Rivals Syntactic Methods for Text Classification, 2015, ACL.
[53] Nemanja Djuric, et al. E-commerce in Your Inbox: Product Recommendations at Scale, 2015, KDD.
[54] Jure Leskovec, et al. node2vec: Scalable Feature Learning for Networks, 2016, KDD.
[55] Philipp Koehn, et al. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016.
[56] Eneko Agirre, et al. SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation, 2016, *SEMEVAL.
[57] Gabriel Peyré, et al. Stochastic Optimization for Large-scale Optimal Transport, 2016, NIPS.
[58] James Henderson, et al. A Vector Space for Distributional Semantics for Entailment, 2016, ACL.
[59] Ido Dagan, et al. Improving Hypernymy Detection with an Integrated Path-based and Distributional Method, 2016, ACL.
[60] Matt J. Kusner, et al. Supervised Word Mover's Distance, 2016, NIPS.
[61] Adam Tauman Kalai, et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, 2016, NIPS.
[62] Gabriel Peyré, et al. Fast Dictionary Learning with a Smoothed Wasserstein Loss, 2016, AISTATS.
[63] Douwe Kiela, et al. Poincaré Embeddings for Learning Hierarchical Representations, 2017, NIPS.
[64] Jason Altschuler, et al. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration, 2017, NIPS.
[65] Sanjeev Arora, et al. A Simple but Tough-to-Beat Baseline for Sentence Embeddings, 2017, ICLR.
[66] James Henderson, et al. Learning Word Embeddings for Hyponymy with Entailment-Based Distributional Semantics, 2017, ArXiv.
[67] Thomas Hofmann, et al. Leveraging Large Amounts of Weakly Supervised Data for Multi-Language Sentiment Classification, 2017, WWW.
[68] James Zijun Wang, et al. Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering, 2017, ACL.
[69] Meng Zhang, et al. Earth Mover's Distance Minimization for Unsupervised Bilingual Lexicon Induction, 2017, EMNLP.
[70] Felix Hill, et al. HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment, 2016, CL.
[71] Tomas Mikolov, et al. Enriching Word Vectors with Subword Information, 2016, TACL.
[72] Andrew Gordon Wilson, et al. Multimodal Word Distributions, 2017, ACL.
[73] Holger Schwenk, et al. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data, 2017, EMNLP.
[74] Andrew McCallum, et al. Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection, 2017, NAACL.
[75] Philipp Koehn, et al. Findings of the 2018 Conference on Machine Translation (WMT18), 2018, WMT.
[76] William Chan, et al. InferLite: Simple Universal Sentence Representations from Natural Language Inference Data, 2018, EMNLP.
[77] Douwe Kiela, et al. SentEval: An Evaluation Toolkit for Universal Sentence Representations, 2018, LREC.
[78] Matteo Pagliardini, et al. Unsupervised Learning of Sentence Embeddings Using Compositional n-Gram Features, 2017, NAACL.
[79] Graham Neubig, et al. When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?, 2018, NAACL.
[80] Marco Cuturi, et al. Generalizing Point Embeddings using the Wasserstein Space of Elliptical Distributions, 2018, NeurIPS.
[81] Xuanjing Huang, et al. Reinforced Evolutionary Neural Architecture Search, 2018, ArXiv.
[82] Jason Weston, et al. StarSpace: Embed All The Things!, 2017, AAAI.
[83] Thomas Hofmann, et al. Hyperbolic Entailment Cones for Learning Hierarchical Embeddings, 2018, ICML.
[84] Wei Liu, et al. Distilled Wasserstein Learning for Word Embedding and Topic Modeling, 2018, NeurIPS.
[85] Christopher Joseph Pal, et al. Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning, 2018, ICLR.
[86] Pradeep Ravikumar, et al. Word Mover's Embedding: From Word2Vec to Document Embedding, 2018, EMNLP.
[87] Edouard Grave, et al. Unsupervised Alignment of Embeddings with Wasserstein Procrustes, 2018, AISTATS.
[88] Justin Solomon, et al. Learning Embeddings into Entropic Wasserstein Spaces, 2019, ICLR.
[89] Gary Bécigneul, et al. Poincaré GloVe: Hyperbolic Word Embeddings, 2018, ICLR.