Learning to Recognize Speech by Watching Television

Our proposed technique gathers large amounts of speech from open broadcast sources and combines it with automatically obtained text or closed captioning to identify suitable speech-training material. George Zavaliagkos and Thomas Colthurst pursued the same goal with a different approach, using confidence scoring on the acoustic data itself to improve performance in the absence of any transcribed data, but it yielded only marginal gains. Our own initial efforts likewise achieved limited success with small amounts of data. Here we describe our approach to collecting almost unlimited amounts of accurately transcribed speech data. Such data serves as training material for the acoustic-model component of most high-accuracy, speaker-independent speech-recognition systems. We align the error-ridden closed-caption text with the similarly error-ridden output of an existing speech recognizer, and we assume that matching segments of sufficient length are reliable transcriptions of the corresponding speech. We then use these segments as training data for an improved speech recognizer.
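The core selection step can be illustrated with a minimal sketch. This is not the authors' implementation; it simply aligns the caption word sequence against the recognizer's word sequence and keeps matching runs that reach a minimum length (here an assumed threshold of five words), treating those runs as trustworthy transcriptions:

```python
import difflib

def reliable_segments(captions, hypothesis, min_words=5):
    """Align closed-caption tokens with recognizer-output tokens and
    return matching runs long enough to trust as transcriptions.

    min_words is a hypothetical threshold; the real system would tune
    the minimum matching-segment length empirically.
    """
    cap = captions.lower().split()
    hyp = hypothesis.lower().split()
    # SequenceMatcher finds maximal matching blocks between the two
    # error-ridden word sequences.
    matcher = difflib.SequenceMatcher(a=cap, b=hyp, autojunk=False)
    segments = []
    for block in matcher.get_matching_blocks():
        if block.size >= min_words:
            segments.append(" ".join(cap[block.a:block.a + block.size]))
    return segments

# Both sources contain errors, but where they agree for a long
# enough stretch, the words are very likely correct.
captions = "the quick brown fox jumps over the lazy dog today"
hypothesis = "a quick brown fox jumps over the hazy dog today"
print(reliable_segments(captions, hypothesis))
```

In a full system, the surviving word runs would be mapped back to their time-aligned audio regions, and those speech segments, paired with the agreed-upon text, would feed acoustic-model training.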