Consensus Attention-based Neural Networks for Chinese Reading Comprehension
Ting Liu | Guoping Hu | Yiming Cui | Shijin Wang | Zhipeng Chen