Lifting Sequence Length Limitations of NLP Models using Autoencoders