Improving Readability for Automatic Speech Recognition Transcription

Modern Automatic Speech Recognition (ASR) systems can achieve high recognition accuracy. However, even a perfectly accurate transcript can be challenging to read because of the grammatical errors, disfluencies, and other irregularities common in spoken language. Many downstream tasks and human readers rely on the output of the ASR system; errors introduced by the speaker and the ASR system alike are therefore propagated to the next task in the pipeline. In this work, we propose a novel NLP task called ASR post-processing for readability (APR), which aims to transform noisy ASR output into readable text for humans and downstream tasks while preserving the semantic meaning of the speaker. In addition, we describe a method that addresses the lack of task-specific data by synthesizing examples for the APR task: datasets collected for Grammatical Error Correction (GEC) are passed through text-to-speech (TTS) and ASR to produce noisy input-output pairs. Furthermore, we propose metrics borrowed from similar tasks to evaluate performance on the APR task. We compare fine-tuned models based on several open-source, adapted pre-trained models against a traditional pipeline method. Our results suggest that fine-tuned models significantly improve performance on the APR task, hinting at the potential benefits of APR systems. We hope that the read, understand, and rewrite approach of our work can serve as a basis from which many NLP tasks and human readers can benefit.
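
To make the data-synthesis step concrete, the sketch below mirrors the GEC-to-APR pipeline described above: the corrected side of a GEC pair serves as the APR target, while the ungrammatical side is spoken by a TTS engine and re-transcribed by ASR to obtain a realistically noisy source. The `synthesize_speech` and `transcribe` helpers here are hypothetical stand-ins, since the abstract does not name the specific TTS and ASR systems used.

```python
from dataclasses import dataclass

@dataclass
class AprExample:
    source: str  # noisy ASR transcript (model input)
    target: str  # fluent, readable text (model output)

def synthesize_speech(text: str) -> bytes:
    """Hypothetical TTS call: returns audio for the given text."""
    raise NotImplementedError  # plug in any TTS engine here

def transcribe(audio: bytes) -> str:
    """Hypothetical ASR call: returns a transcript of the audio."""
    raise NotImplementedError  # plug in any ASR system here

def gec_pair_to_apr_example(ungrammatical: str, corrected: str) -> AprExample:
    # Speak the ungrammatical sentence, then re-recognize it, so the
    # source carries both speaker errors (from the GEC corpus) and
    # recognition errors (from the ASR pass).
    audio = synthesize_speech(ungrammatical)
    noisy_transcript = transcribe(audio)
    return AprExample(source=noisy_transcript, target=corrected)
```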
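
The abstract does not name the borrowed metrics; given the neighboring tasks, BLEU (from machine translation) and GLEU are natural candidates, and both have sentence-level implementations in NLTK. A minimal sketch, assuming NLTK is installed:

```python
from nltk.translate.bleu_score import sentence_bleu
from nltk.translate.gleu_score import sentence_gleu

reference = "I think the results were pretty good .".split()
hypothesis = "i think the the results was pretty good".split()

# Both scores measure n-gram overlap between system output and a human
# reference; NLTK's GLEU is the sentence-level variant that takes the
# minimum of n-gram precision and recall, making it more stable than
# BLEU on single sentences.
print(sentence_bleu([reference], hypothesis))
print(sentence_gleu([reference], hypothesis))
```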
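
As one example of the fine-tuning setup, a pre-trained sequence-to-sequence model such as BART can be trained directly on (noisy transcript, readable text) pairs. This is a generic Hugging Face `transformers` sketch, not the authors' exact configuration, which the abstract does not specify:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One synthetic APR pair: noisy ASR output in, readable text out.
noisy = "so um i think the the results was pretty good"
clean = "I think the results were pretty good."

inputs = tokenizer(noisy, return_tensors="pt")
labels = tokenizer(clean, return_tensors="pt").input_ids

# Standard teacher-forced cross-entropy step on the clean target.
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```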
