Adding more data does not always help: A study in medical conversation summarization with PEGASUS

Medical conversation summarization is integral to capturing information gathered during interactions between patients and physicians. Summarized conversations are used to facilitate patient hand-offs between physicians and as part of providing future care. Summaries, however, can be time-consuming to produce and require domain expertise. Modern pre-trained NLP models such as PEGASUS have emerged as capable alternatives to human summarization, reaching state-of-the-art performance on many summarization benchmarks. However, many downstream tasks still require at least moderately sized datasets to achieve satisfactory performance. In this work we (1) explore the effect of dataset size on transfer learning for medical conversation summarization using PEGASUS and (2) evaluate various iterative labeling strategies in the low-data regime, following their success in the classification setting. We find that model performance saturates as dataset size increases, and that all of the active-learning strategies evaluated perform equivalently to simply increasing the dataset size. We also find that naive iterative pseudo-labeling performs on par with, or slightly worse than, no pseudo-labeling. Our work sheds light on the successes and challenges of translating low-data-regime techniques from classification to medical conversation summarization and helps guide future work in this space. Relevant code is available at https://github.com/curai/curai-research/tree/main/medical-summarization-ML4H-2021.

∗ Duke University. Work done while the author was a research intern at Curai.
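The "naive iterative pseudo-labeling" evaluated above can be sketched as a simple loop: fine-tune on the labeled set, summarize the unlabeled conversations with the current model, add those model-generated summaries as pseudo-labels, and retrain. A minimal, model-agnostic sketch follows; `train` and `summarize` are hypothetical stand-ins (not from the paper) for fine-tuning and inference with a model such as PEGASUS.

```python
# Sketch of naive iterative pseudo-labeling for summarization.
# Assumptions: `train(pairs)` fine-tunes a model on (conversation, summary)
# pairs and returns it; `summarize(model, conv)` generates a summary.
# Neither name is from the paper; both are illustrative placeholders.

def iterative_pseudo_label(labeled, unlabeled, train, summarize, rounds=3):
    """Grow the training set with the model's own summaries, then retrain."""
    model = train(labeled)
    for _ in range(rounds):
        # Generate pseudo-summaries for the still-unlabeled conversations.
        pseudo = [(conv, summarize(model, conv)) for conv in unlabeled]
        # Naive variant: keep every pseudo-label, no confidence filtering.
        model = train(labeled + pseudo)
    return model
```

With toy stubs for `train` and `summarize`, the loop runs as-is; swapping in a real seq2seq fine-tuning routine is the only change needed to reproduce the naive strategy described above.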
