Analysis of sentence embedding models using prediction tasks in natural language processing
Yonatan Belinkov | Yoav Goldberg | Yossi Adi | Ofer Lavi | Einat Kermany
[1] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[2] Clément Farabet, et al. Torch7: A Matlab-like Environment for Machine Learning, 2011, NIPS.
[3] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[4] Georgiana Dinu, et al. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors, 2014, ACL.
[5] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[6] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[7] Felix Hill, et al. Learning Distributed Representations of Sentences from Unlabelled Data, 2016, NAACL.
[8] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[9] Yang Wang, et al. rnn: Recurrent Library for Torch, 2015, ArXiv.
[10] Alexander M. Rush, et al. Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks, 2016, ArXiv.
[11] Allyson Ettinger, et al. Probing for semantic evidence of composition by means of simple classification tasks, 2016, RepEval@ACL.
[12] Quoc V. Le, et al. Semi-supervised Sequence Learning, 2015, NIPS.
[13] Fei-Fei Li, et al. Visualizing and Understanding Recurrent Networks, 2015, ArXiv.
[14] Christopher D. Manning, et al. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks, 2015, ACL.
[15] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[16] Zhizheng Wu, et al. Investigating gated recurrent neural networks for speech synthesis, 2016, ArXiv.
[17] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[18] Grzegorz Chrupala, et al. Representation of Linguistic Form and Function in Recurrent Neural Networks, 2016, CL.
[19] Omer Levy, et al. Do Supervised Distributional Methods Really Learn Lexical Inference Relations?, 2015, NAACL.
[20] Steven Bird, et al. NLTK: The Natural Language Toolkit, 2002, ACL.
[21] Petr Sojka, et al. Software Framework for Topic Modelling with Large Corpora, 2010.
[22] Daniel Jurafsky, et al. A Hierarchical Neural Autoencoder for Paragraphs and Documents, 2015, ACL.
[23] Geoffrey E. Hinton, et al. Speech recognition with deep recurrent neural networks, 2013, IEEE ICASSP.
[24] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[25] Deniz Yuret, et al. Why Neural Translations are the Right Length, 2016, EMNLP.
[26] Omer Levy, et al. Neural Word Embedding as Implicit Matrix Factorization, 2014, NIPS.
[27] Xing Shi, et al. Does String-Based Neural MT Learn Source Syntax?, 2016, EMNLP.
[28] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[29] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[30] Marco Baroni, et al. A practical and linguistically-motivated approach to compositional distributional semantics, 2014, ACL.
[31] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.