Arman Cohan | Iz Beltagy | Daniel King | Bhavana Dalvi | Daniel S. Weld
[1] Ming-Wei Chang, et al. Language Model Pre-training for Hierarchical Document Representations, 2019, ArXiv.
[2] Iryna Gurevych, et al. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging, 2017, EMNLP.
[3] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[4] Peter Szolovits, et al. Hierarchical Neural Networks for Sequential Sentence Classification in Medical Scientific Abstracts, 2018, EMNLP.
[5] W. Richardson, et al. The well-built clinical question: a key to evidence-based decisions, 1995, ACP Journal Club.
[6] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[7] Luke S. Zettlemoyer, et al. AllenNLP: A Deep Semantic Natural Language Processing Platform, 2018, ArXiv.
[8] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[9] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[10] Doug Downey, et al. Construction of the Literature Graph in Semantic Scholar, 2018, NAACL.
[11] Bowen Zhou, et al. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents, 2017, AAAI.
[12] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[13] Mirella Lapata, et al. Neural Summarization by Extracting Sentences and Words, 2016, ACL.
[14] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[15] Isabelle Augenstein, et al. A Supervised Approach to Extractive Summarisation of Scientific Papers, 2017, CoNLL.
[16] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[17] Nazli Goharian, et al. Scientific Article Summarization Using Citation-Context and Article’s Discourse Structure, 2015, EMNLP.
[18] Iz Beltagy, et al. SciBERT: A Pretrained Language Model for Scientific Text, 2019, EMNLP.
[19] Andrew McCallum, et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, 2001, ICML.
[20] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[21] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[22] Mirella Lapata, et al. Ranking Sentences for Extractive Summarization with Reinforcement Learning, 2018, NAACL.
[23] Franck Dernoncourt, et al. PubMed 200k RCT: a Dataset for Sequential Sentence Classification in Medical Abstracts, 2017, IJCNLP.
[24] Xiaodong Liu, et al. Unified Language Model Pre-training for Natural Language Understanding and Generation, 2019, NeurIPS.
[25] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2015, ICLR.
[26] David Martínez, et al. Automatic classification of sentences to support Evidence Based Medicine, 2011, BMC Bioinformatics.
[27] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.