Multi-source training and adaptation for generic speech recognition

In recent years a considerable amount of work has been devoted to porting speech recognizers to new tasks. Recognition systems are usually tuned to a particular task, and porting a system to a new task (or language) is both time-consuming and expensive. In this paper, issues in speech recognition portability are addressed, in particular the development of generic models for speech recognition. Multi-source training techniques aimed at enhancing the genericity of wide-domain models are investigated. We show that multi-source training and adaptation can reduce the performance gap between task-independent and task-dependent acoustic models, and for some tasks can even outperform task-dependent acoustic models.