The Common Voice corpus is a massively multilingual collection of transcribed speech intended for speech technology research and development. Common Voice is designed for Automatic Speech Recognition purposes but can be useful in other domains (e.g., language identification). To achieve scale and sustainability, the Common Voice project employs crowdsourcing for both data collection and data validation. The most recent release includes 29 languages, and as of November 2019 data collection is under way for a total of 38 languages. Over 50,000 individuals have participated so far, resulting in 2,500 hours of collected audio. To our knowledge, this is the largest audio corpus in the public domain for speech recognition, both in terms of number of hours and number of languages. As an example use case for Common Voice, we present speech recognition experiments using Mozilla’s DeepSpeech Speech-to-Text toolkit. By applying transfer learning from a source English model, we find an average Character Error Rate improvement of 5.99 ± 5.48 for twelve target languages (German, French, Italian, Turkish, Catalan, Slovenian, Welsh, Irish, Breton, Tatar, Chuvash, and Kabyle). For most of these languages, these are the first published results on end-to-end Automatic Speech Recognition.
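For context, Character Error Rate (CER) is the character-level Levenshtein edit distance between a hypothesis transcript and its reference, normalised by the reference length. The snippet below is a minimal illustrative sketch in plain Python, not the evaluation code used for the experiments reported here:

```python
def character_error_rate(reference: str, hypothesis: str) -> float:
    """CER = Levenshtein edit distance over characters / reference length."""
    # Dynamic-programming edit distance (insertions, deletions, substitutions).
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, start=1):
        curr = [i] + [0] * len(hypothesis)
        for j, h in enumerate(hypothesis, start=1):
            curr[j] = min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (free if characters match)
            )
        prev = curr
    return prev[-1] / max(len(reference), 1)

# Example: one substituted character in a ten-character reference gives 10% CER.
print(round(100 * character_error_rate("erster mai", "erster mal"), 1))  # 10.0
```

Read as percentage points, an average improvement of 5.99 would correspond to roughly six fewer character errors per hundred reference characters.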