Abhik Bhattacharjee | Tahmid Hasan | Kazi Samin Mubasshir | M. Sohel Rahman | Anindya Iqbal | Rifat Shahriyar
[1] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, arXiv.
[2] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[3] Quoc V. Le, et al. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, 2020, ICLR.
[4] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[5] Laurent Romary, et al. CamemBERT: a Tasty French Language Model, 2019, ACL.
[6] Eva Schlinger, et al. How Multilingual is Multilingual BERT?, 2019, ACL.
[7] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[8] Vishrav Chaudhary, et al. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data, 2019, LREC.
[9] Ion Androutsopoulos, et al. GREEK-BERT: The Greeks visiting Sesame Street, 2020, SETN.
[10] Wilson L. Taylor, et al. “Cloze Procedure”: A New Tool for Measuring Readability, 1953.
[11] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[12] Benoît Sagot, et al. Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures, 2019.
[13] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[14] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[15] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[16] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[17] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[18] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[19] Guillaume Lample, et al. Cross-lingual Language Model Pretraining, 2019, NeurIPS.
[20] Md. Saiful Islam, et al. A Deep Recurrent Neural Network with BiLSTM model for Sentiment Classification, 2018, International Conference on Bangla Speech and Language Processing (ICBSLP).
[21] Giovanni Semeraro, et al. AlBERTo: Italian BERT Language Understanding Model for NLP Challenging Tasks Based on Tweets, 2019, CLiC-it.
[22] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, IEEE International Conference on Computer Vision (ICCV).
[23] Erik F. Tjong Kim Sang, et al. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition, 2003, CoNLL.
[24] M. Sohel Rahman, et al. Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation, 2020, EMNLP.
[25] Nabeel Mohammed, et al. Banner: A Cost-Sensitive Contextualized Model for Bangla Named Entity Recognition, 2020, IEEE Access.
[26] Dat Quoc Nguyen, et al. PhoBERT: Pre-trained language models for Vietnamese, 2020, Findings.
[27] Tommaso Caselli, et al. BERTje: A Dutch BERT Model, 2019, arXiv.
[28] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[29] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[30] Veselin Stoyanov, et al. Unsupervised Cross-lingual Representation Learning at Scale, 2019, ACL.
[31] Tomas Mikolov, et al. Bag of Tricks for Efficient Text Classification, 2016, EACL.
[32] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[33] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[34] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[35] Mitesh M. Khapra, et al. AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages, 2020, arXiv.
[36] Benoît Sagot, et al. What Does BERT Learn about the Structure of Language?, 2019, ACL.
[37] Rashedur M. Rahman, et al. A step towards information extraction: Named entity recognition in Bangla using deep learning, 2019, J. Intell. Fuzzy Syst.
[38] Guillaume Lample, et al. XNLI: Evaluating Cross-lingual Sentence Representations, 2018, EMNLP.