Enhanced-RCNN: An Efficient Method for Learning Sentence Similarity
Shuang Peng | Xiaolong Li | Jiaxing Zhang | Hengbin Cui | Sujian Li | Niantao Xie