Effects of Talker Dialect, Gender & Race on Accuracy of Bing Speech and YouTube Automatic Captions

This project compares the accuracy of two automatic speech recognition (ASR) systems, Bing Speech and YouTube's automatic captions, across gender, race, and four dialects of American English. The dialects were chosen for their acoustic dissimilarity. Bing Speech showed differences in word error rate (WER) between dialects and ethnicities, but these differences were not statistically reliable. YouTube's automatic captions, however, did show statistically reliable differences in WER between dialects and between races, with the lowest average error rates for General American talkers and white talkers, respectively. Neither system showed a reliably different WER between genders, a difference that had previously been reported for YouTube's automatic captions [11]. The higher error rates for non-white talkers are worrying, however, as they may reduce the utility of these systems for talkers of color.
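Both systems are scored with word error rate. As a minimal, illustrative sketch (not the authors' evaluation code), WER can be computed as the word-level edit distance between a reference transcript and an ASR hypothesis, normalized by the length of the reference; the function name and example sentences below are hypothetical.

    # Illustrative WER computation: word-level Levenshtein distance / reference length.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref = reference.split()
        hyp = hypothesis.split()
        # Dynamic-programming edit distance over words
        # (counts substitutions, deletions, and insertions).
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    # Example: one substitution ("sat" -> "sit") and one deletion ("the")
    # over six reference words gives WER = 2/6, roughly 0.33.
    print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))

Per-talker WER values computed this way can then be compared across dialect, gender, and race groups with standard significance tests, which is the kind of comparison the abstract describes.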

[1] Matthew J. Gordon, et al. Small-Town Values and Big-City Vowels: A Study of the Northern Cities Shift in Michigan, 2000.

[2] Ian R. Lane, et al. Pronunciation modeling for dialectal Arabic speech recognition, 2009, 2009 IEEE Workshop on Automatic Speech Recognition & Understanding.

[3] Natalie Schilling-Estes, et al. American English: Dialects and Variation, 1998.

[4] Izhak Shafran, et al. Discriminative pronunciation modeling for dialectal speech recognition, 2014, INTERSPEECH.

[5] J. Rickford, et al. African American Vernacular English: Features, Evolution, Educational Implications, 1999.

[6] Ye-Yi Wang, et al. Is word error rate a good indicator for spoken language understanding accuracy, 2003, 2003 IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE Cat. No.03EX721).

[7] Richard Wright, et al. The Hyperspace Effect: Phonetic Targets Are Hyperarticulated, 1993.

[8] Fernando Peñalosa. Chicano sociolinguistics, a brief introduction, 1981.

[9] Alfred Mertins, et al. Automatic speech recognition and speech variability: A review, 2007, Speech Communication.

[10] Stephane Champely, et al. Basic Functions for Power Analysis, 2015.

[11] Rachael Tatman, et al. Gender and Dialect Bias in YouTube's Automatic Captions, 2017, EthNLP@EACL.

[12] Geoffrey Zweig, et al. The Microsoft 2016 conversational speech recognition system, 2016, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[13] Penelope Eckert, et al. Where do ethnolects stop?, 2008.

[14] M. Sawalha, et al. The effects of speakers' gender, age, and region on overall performance of Arabic automatic speech recognition systems using the phonetically rich and balanced Modern Standard Arabic speech corpus, 2013.

[15] Stephen J. Cox, et al. Unsupervised model selection for recognition of regional accented speech, 2014, INTERSPEECH.

[16] Daniel Jurafsky, et al. Which words are hard to recognize? Prosodic, lexical, and disfluency factors that increase speech recognition error rates, 2010, Speech Communication.

[17] John C. Wells, et al. Accents of English, 1982.