Towards Generalizable Sentence Embeddings
Raquel Urtasun | Richard S. Zemel | Jamie Ryan Kiros | Eleni Triantafillou