Acoustic analysis and recognition of whispered speech

In this paper, the acoustic properties and recognition of whispered speech are discussed. A whispered-speech database consisting of whispered speech, normal speech, and the corresponding facial video images of more than 6,000 sentences from 100 speakers was prepared. The comparison between whispered and normal utterances shows that 1) the cepstrum distance between them is 4 dB for voiced and 2 dB for unvoiced phonemes, respectively, 2) the spectral tilt of whispered speech is less sloped than that of normal speech, and 3) the frequencies of the lower formants (below 1.5 kHz) are higher than those of normal speech. Acoustic models (HMMs) trained on the whispered-speech database attain 68% accuracy in word recognition experiments. This accuracy improves to 78% when MLLR adaptation is applied, whereas normal-speech HMMs adapted with whispered speech attain only 62% word accuracy.
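The two quantities central to the abstract can be made concrete. The cepstrum distance in dB is conventionally approximated from cepstral coefficients via the log-spectral distance, and MLLR adapts each Gaussian mean with an affine transform. The following sketch illustrates both under standard textbook definitions; the function names and interfaces are illustrative, not taken from the paper:

```python
import numpy as np

def cepstral_distance_db(c1, c2):
    """Log-spectral distance (dB) approximated from cepstral
    coefficient vectors c_1..c_p (c_0 excluded):
    CD = (10/ln 10) * sqrt(2 * sum_k (c1_k - c2_k)^2)."""
    diff = np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float)
    return (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2))

def apply_mllr_mean_transform(W, mu):
    """Adapt a Gaussian mean vector mu with an MLLR transform
    W = [b A] (bias stacked with a square matrix):
    mu' = A @ mu + b, computed as W @ [1, mu]."""
    xi = np.concatenate(([1.0], np.asarray(mu, dtype=float)))
    return W @ xi
```

In an MLLR setup, W is estimated by maximizing the likelihood of a small amount of adaptation data (here, whispered speech) and is typically shared across many Gaussians, which is what lets adaptation work with limited target-style data.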