Pierre Zweigenbaum | Hicham El Boukkouri | Olivier Ferret | Thomas Lavergne | Hiroshi Noji | Junichi Tsujii
[1] David A. Moore, et al. BERT Goes to Law School: Quantifying the Competitive Advantage of Access to Large Legal Corpora in Contract Understanding, 2019, ArXiv.
[2] Christopher D. Manning, et al. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models, 2016, ACL.
[3] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[4] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, IEEE International Conference on Computer Vision (ICCV).
[5] James Demmel, et al. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes, 2019, ICLR.
[6] Jingqi Wang, et al. Enhancing Clinical Concept Extraction with Contextual Embedding, 2019, J. Am. Medical Informatics Assoc.
[7] Shuying Shen, et al. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text, 2011, J. Am. Medical Informatics Assoc.
[8] Anália Lourenço, et al. Overview of the BioCreative VI chemical-protein interaction track, 2017.
[9] Kevin Gimpel, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019, ICLR.
[10] Paloma Martínez, et al. The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions, 2013, J. Biomed. Informatics.
[11] Rotem Dror, et al. Deep Dominance - How to Properly Compare Deep Neural Models, 2019, ACL.
[12] Zhiyong Lu, et al. Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets, 2019, BioNLP@ACL.
[13] Alexander M. Rush, et al. Character-Aware Neural Language Models, 2015, AAAI.
[14] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[15] Bhuwan Dhingra, et al. Combating Adversarial Misspellings with Robust Word Recognition, 2019, ACL.
[16] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.
[17] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[18] Wanxiang Che, et al. Pre-Training with Whole Word Masking for Chinese BERT, 2019, ArXiv.
[19] Quoc V. Le, et al. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, 2020, ICLR.
[20] Pierre Zweigenbaum, et al. Embedding Strategies for Specialized Domains: Application to Clinical Entity Recognition, 2019, ACL.
[21] Koray Kavukcuoglu, et al. Learning word embeddings efficiently with noise-contrastive estimation, 2013, NIPS.
[22] Maosong Sun, et al. ERNIE: Enhanced Language Representation with Informative Entities, 2019, ACL.
[23] Jürgen Schmidhuber, et al. Training Very Deep Networks, 2015, NIPS.
[24] Yonghui Wu, et al. Exploring the Limits of Language Modeling, 2016, ArXiv.
[25] Alexey Romanov, et al. Lessons from Natural Language Inference in the Clinical Domain, 2018, EMNLP.
[26] Hao Tian, et al. ERNIE 2.0: A Continual Pre-training Framework for Language Understanding, 2019, AAAI.
[27] Peter Szolovits, et al. MIMIC-III, a freely accessible critical care database, 2016, Scientific Data.
[28] Hongfang Liu, et al. MedSTS: a resource for clinical semantic textual similarity, 2018, Language Resources and Evaluation.
[29] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[30] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[31] Jaewoo Kang, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining, 2019, Bioinformatics.
[32] Philip S. Yu, et al. Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT, 2020, ArXiv.
[33] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.