Xavier Amatriain | Namit Katariya | Anitha Kannan | Ilya Valmianski | Varun Nair
[1] Wendy W. Chapman, et al. ConText: An algorithm for determining negation, experiencer, and temporal status from clinical reports, 2009, J. Biomed. Informatics.
[2] Xavier Amatriain, et al. Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures, 2020, Findings of EMNLP.
[3] David Berthelot, et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence, 2020, NeurIPS.
[4] Christine A. Sinsky, et al. Relationship Between Clerical Burden and Characteristics of the Electronic Environment With Physician Burnout and Professional Satisfaction, 2016, Mayo Clinic Proceedings.
[5] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[6] Quoc V. Le, et al. Unsupervised Data Augmentation, 2019, arXiv.
[7] Yao Zhao, et al. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization, 2020, ICML.
[8] Liwen Xu, et al. ChicHealth @ MEDIQA 2021: Exploring the limits of pre-trained seq2seq models for medical summarization, 2021, BioNLP.
[9] Namit Katariya, et al. Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization, 2021, NLPMC.
[10] Diyi Yang, et al. MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification, 2020, ACL.
[11] Yajuan Lyu, et al. BDKG at MEDIQA 2021: System Report for the Radiology Report Summarization Task, 2021, BioNLP.
[12] Vishrav Chaudhary, et al. Self-training Improves Pre-training for Natural Language Understanding, 2020, NAACL.
[13] Asma Ben Abacha, et al. Question-aware Transformer Models for Consumer Health Question Summarization, 2021, J. Biomed. Informatics.