Automatic Segmentation, Classification and Clustering of Broadcast News Audio

Automatic recognition of broadcast feeds from radio and television sources has been gaining importance recently, especially with the success of systems such as the CMU Informedia system [1]. In this work we describe the problems faced in adapting a system built to recognize one utterance at a time to a task that requires recognition of an entire half-hour show. We break the problem into three components: segmentation, classification, and clustering. We show that a priori knowledge of the acoustic conditions and speakers in the broadcast data is not required for segmentation. The system is able to detect changes in acoustic conditions, recognize previously observed conditions, and use this information to pool adaptation data. We also describe a novel application of the symmetric Kullback-Leibler distance metric, which serves as a single solution to both the segmentation and clustering problems. The three components are evaluated through comparisons between the Partitioned and Unpartitioned components of the 1996 ARPA Hub 4 evaluation test set.
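
As a concrete illustration of the distance metric (a sketch, not the authors' implementation): when each audio window is modeled as a diagonal-covariance Gaussian over its acoustic features, the symmetric Kullback-Leibler distance between two windows has a simple closed form. The Python code below assumes that Gaussian modeling; the function name, window shapes, and threshold logic are illustrative.

    import numpy as np

    def symmetric_kl_distance(x, y, eps=1e-8):
        """Symmetric KL distance between two feature windows, each modeled
        as a diagonal-covariance Gaussian.

        x, y: arrays of shape (n_frames, n_dims), e.g. cepstral frames.
        """
        mu_x, var_x = x.mean(axis=0), x.var(axis=0) + eps
        mu_y, var_y = y.mean(axis=0), y.var(axis=0) + eps
        diff_sq = (mu_x - mu_y) ** 2
        # Closed form of KL(x||y) + KL(y||x), summed over dimensions.
        per_dim = (var_x / var_y + var_y / var_x
                   + diff_sq * (1.0 / var_x + 1.0 / var_y))
        return 0.5 * per_dim.sum() - x.shape[1]

    # Toy check: windows drawn from the same condition score low,
    # windows from different conditions score high.
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(200, 13))   # one acoustic condition
    b = rng.normal(0.5, 1.5, size=(200, 13))   # a different condition
    print(symmetric_kl_distance(a[:100], a[100:]))  # small
    print(symmetric_kl_distance(a, b))              # large

Under this formulation, peaks in the distance between adjacent sliding windows mark hypothesized acoustic-change boundaries, and the same distance can drive agglomerative clustering of the resulting segments, which is what lets one metric address both the segmentation and clustering problems.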