Learning to Compose Task-Specific Tree Structures
[1] Hong Yu, et al. Neural Semantic Encoders, 2017, EACL.
[2] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[3] Phil Blunsom, et al. A Convolutional Neural Network for Modelling Sentences, 2014, ACL.
[4] Quoc V. Le, et al. Semi-supervised Sequence Learning, 2015, NIPS.
[5] Yoshua Bengio, et al. Hierarchical Multiscale Recurrent Neural Networks, 2017, ICLR.
[6] Hongyu Guo, et al. Long Short-Term Memory Over Recursive Structures, 2015, ICML.
[7] Stephen Clark, et al. Jointly learning sentence embeddings and syntax with unsupervised Tree-LSTMs, 2017, Natural Language Engineering.
[8] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[9] Ben Poole, et al. Categorical Reparameterization with Gumbel-Softmax, 2017, ICLR.
[10] Georgiana Dinu, et al. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors, 2014, ACL.
[11] Yoon Kim, et al. Convolutional Neural Networks for Sentence Classification, 2014, EMNLP.
[12] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[13] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, ICCV.
[14] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[15] Hong Yu, et al. Neural Tree Indexers for Text Understanding, 2017, EACL.
[16] Tom Minka, et al. A* Sampling, 2014, NIPS.
[17] Jeffrey Pennington, et al. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions, 2011, EMNLP.
[18] Christopher Potts, et al. A Fast Unified Model for Parsing and Sentence Understanding, 2016, ACL.
[19] Jure Leskovec, et al. Inferring Networks of Substitutable and Complementary Products, 2015, KDD.
[20] Wang Ling, et al. Learning to Compose Words into Sentences with Reinforcement Learning, 2017, ICLR.
[21] Zhen-Hua Ling, et al. Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference, 2017, RepEval@EMNLP.
[22] John Cocke, et al. Programming languages and their compilers: Preliminary notes, 1969.
[23] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[24] Tony A. Plate, et al. Holographic reduced representations, 1995, IEEE Transactions on Neural Networks.
[25] Claire Cardie, et al. Deep Recursive Neural Networks for Compositionality in Language, 2014, NIPS.
[26] Alexander M. Rush, et al. Structured Attention Networks, 2017, ICLR.
[27] Rui Yan, et al. Natural Language Inference by Tree-Based Convolution and Heuristic Matching, 2016, ACL.
[28] Daniel H. Younger, et al. Recognition and Parsing of Context-Free Languages in Time n^3, 1967, Information and Control.
[29] Yoshua Bengio, et al. On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, 2014, SSST@EMNLP.
[30] Richard Socher, et al. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing, 2016, ICML.
[31] Mirella Lapata, et al. Composition in Distributional Models of Semantics, 2010, Cognitive Science.
[32] Yoav Goldberg, et al. An Efficient Algorithm for Easy-First Non-Directional Dependency Parsing, 2010, NAACL.
[33] Peter Norvig, et al. Deep Learning with Dynamic Computation Graphs, 2017, ICLR.
[34] Yee Whye Teh, et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, 2017, ICLR.
[35] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method, 2012, arXiv.
[36] Tadao Kasami, et al. An Efficient Recognition and Syntax-Analysis Algorithm for Context-Free Languages, 1965.
[37] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2015, ICLR.
[38] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, Journal of Machine Learning Research.
[39] Richard Socher, et al. Learned in Translation: Contextualized Word Vectors, 2017, NIPS.
[40] Xuanjing Huang, et al. Sentence Modeling with Gated Recursive Neural Network, 2015, EMNLP.
[41] Han Zhao, et al. Self-Adaptive Hierarchical Sentence Model, 2015, IJCAI.
[42] Fabio Massimo Zanzotto, et al. Distributed Tree Kernels, 2012, ICML.
[43] Mehrnoosh Sadrzadeh, et al. Experimental Support for a Categorical Compositional Distributional Model of Meaning, 2011, EMNLP.
[44] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[45] Mohit Bansal, et al. Shortcut-Stacked Sentence Encoders for Multi-Domain Inference, 2017, RepEval@EMNLP.
[46] Ilya Sutskever, et al. Learning to Generate Reviews and Discovering Sentiment, 2017, arXiv.
[47] Felix Hill, et al. Learning Distributed Representations of Sentences from Unlabelled Data, 2016, NAACL.
[48] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[49] Christopher D. Manning, et al. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks, 2015, ACL.
[50] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[51] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[52] Omer Levy, et al. Neural Word Embedding as Implicit Matrix Factorization, 2014, NIPS.
[53] Victor O. K. Li, et al. Neural Machine Translation with Gumbel-Greedy Decoding, 2018, AAAI.