Multi-level acoustic segmentation of continuous speech

As part of the goal of better understanding the relationship between the speech signal and the underlying phonemic representation, the authors have developed a procedure that describes the acoustic structure of the signal. Acoustic events are embedded in a multi-level structure in which information ranging from coarse to fine is represented in an organized fashion. An analysis of the acoustic structure, using 500 utterances from 100 different talkers, shows that it captures over 96% of the acoustic-phonetic events of interest with an insertion rate of less than 5%. The signal representation, and the algorithms for determining the acoustic segments and the multi-level structure, are described. Performance results and a comparison with scale-space filtering are also included. The possible use of this segmental description for automatic speech recognition is discussed.
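The coarse-to-fine multi-level structure described above can be illustrated with a minimal sketch. The version below is an assumption, not the paper's actual method: it starts from frame-level segments and greedily merges the most similar pair of adjacent segments, recording each intermediate segmentation as one level of the hierarchy. The feature vectors, the Euclidean distance between segment means, and the merge criterion are all hypothetical stand-ins for the paper's acoustic measures.

```python
# Sketch of dendrogram-style multi-level segmentation (illustrative only;
# the distance measure and merge rule are assumptions, not the paper's).

def multi_level_segmentation(frames):
    """Greedily merge the most similar adjacent segments.

    frames: list of per-frame feature vectors (lists of floats).
    Returns a list of segmentation levels, finest first; each level is a
    list of (start, end) frame-index pairs (end exclusive).
    """
    def mean(seg):
        start, end = seg
        n = end - start
        return [sum(f[d] for f in frames[start:end]) / n
                for d in range(len(frames[0]))]

    def dist(a, b):
        ma, mb = mean(a), mean(b)
        return sum((x - y) ** 2 for x, y in zip(ma, mb)) ** 0.5

    segs = [(i, i + 1) for i in range(len(frames))]   # finest level: one frame each
    levels = [list(segs)]
    while len(segs) > 1:
        # Find the most acoustically similar pair of adjacent segments...
        i = min(range(len(segs) - 1),
                key=lambda k: dist(segs[k], segs[k + 1]))
        # ...and merge them, yielding the next (coarser) level.
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
        levels.append(list(segs))
    return levels
```

Reading the returned `levels` from first to last traverses the structure from fine (frame-level) to coarse (the whole utterance as a single segment), so a recognizer could pick whichever level best matches a hypothesized phonemic unit.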
