Automatic transcription of general audio data: preliminary analyses

The task of automatically transcribing general audio data is very different from the transcription task typically required of current automatic speech recognition systems. The general goal of this work is to quantify the difficulties posed by such data, and thereby to understand how a speech recognition system may have to be altered to accommodate the added complexities. Specifically, we describe some preliminary analyses and experiments conducted on data collected from a radio news program. We found that, using relatively straightforward acoustic measurements and classification techniques, we were able to achieve better than 80% classification accuracy for the seven salient sound classes present in the data, and nearly 94% accuracy for a speech/non-speech decision. In addition, lexical analysis revealed that, while the vocabulary size of a single broadcast is moderate, it grows exponentially as more shows are added.
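
The abstract does not specify which acoustic measurements or classifiers were used. Purely as a rough illustration of the kind of "straightforward" approach it alludes to, the sketch below frames the speech/non-speech decision using two simple frame-level measurements (short-time log energy and zero-crossing rate) and a single Gaussian model per class. The frame size, feature choice, and classifier are assumptions made for illustration; this is not the authors' implementation.

    # Illustrative sketch only (assumed features and classifier, not the
    # method described in the paper): classify audio frames as speech or
    # non-speech from short-time log energy and zero-crossing rate, with
    # one full-covariance Gaussian model per class.
    import numpy as np

    FRAME_LEN = 512  # samples per analysis frame (assumed value)

    def frame_features(signal):
        """Split a 1-D waveform into frames; return [log energy, ZCR] per frame."""
        n_frames = len(signal) // FRAME_LEN
        feats = np.empty((n_frames, 2))
        for i in range(n_frames):
            frame = signal[i * FRAME_LEN:(i + 1) * FRAME_LEN]
            energy = np.log(np.sum(frame ** 2) + 1e-10)
            zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
            feats[i] = (energy, zcr)
        return feats

    class GaussianClassifier:
        """One multivariate Gaussian per class; maximum-likelihood decision."""

        def fit(self, feats_by_class):
            self.models = {}
            for label, x in feats_by_class.items():
                mean = x.mean(axis=0)
                cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
                self.models[label] = (mean, np.linalg.inv(cov),
                                      np.log(np.linalg.det(cov)))

        def predict(self, x):
            # Score every frame under each class model, pick the best class.
            scores = {}
            for label, (mean, inv_cov, log_det) in self.models.items():
                d = x - mean
                scores[label] = -0.5 * (np.sum(d @ inv_cov * d, axis=1) + log_det)
            labels = list(scores)
            score_matrix = np.stack([scores[l] for l in labels])
            return np.array(labels)[np.argmax(score_matrix, axis=0)]

    # Hypothetical usage: train on labelled waveforms, then classify new audio.
    # clf = GaussianClassifier()
    # clf.fit({"speech": frame_features(speech_wave),
    #          "non_speech": frame_features(noise_wave)})
    # predictions = clf.predict(frame_features(test_wave))

The same kind of per-class Gaussian scoring extends directly from the two-way speech/non-speech decision to the seven-way sound-class decision by fitting one model per class.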