Speaker-independent consonant classification in continuous speech with distinctive features and neural networks.
This paper provides experimental evidence for the assertion that the design of appropriate neural networks (NN) for speech recognition should be inspired by acoustic and phonetic knowledge, and not only by knowledge of pattern recognition. Rather than investigating the NN learning paradigm, the paper focuses on how the input parameters, the internal structure, and the desired output representation influence the classification performance of recurrent multilayer perceptrons. As an instructive example, the paper analyzes the problem of classifying ten stop and nasal consonants in continuous speech independently of the speaker. Experiments are reported on the TIMIT database, using 343 speakers in the training set and 77 different speakers in the test set. Comparative experiments show that good performance is obtained when many input acoustic parameters are used, including a time/frequency gradient operator related to transitions of the second formant, and when the desired outputs represent context-dependent articulatory features. Classification is performed by principal component analysis of the NN outputs. Refinement of the design parameters yields increasingly better performance on the test set, with error rates ranging from 45% for a perceptron without hidden nodes to 23.3% for the best NN.
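The abstract outlines a pipeline of acoustic frames feeding a recurrent multilayer perceptron whose outputs encode articulatory features, followed by principal component analysis of those outputs for the final consonant decision. The sketch below is a minimal, assumption-laden illustration of that general pipeline, not the authors' implementation: all dimensions, weights, and the nearest-centroid decision rule are hypothetical placeholders, written in plain NumPy.

```python
# Illustrative sketch only: the paper's exact architecture, acoustic features,
# and training procedure are not reproduced here. It shows the general flow
# the abstract describes: acoustic frames -> recurrent MLP -> articulatory-
# feature outputs -> principal component analysis (PCA) -> consonant decision.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not the paper's values).
n_acoustic = 40      # acoustic parameters per frame (e.g. spectra + gradients)
n_hidden = 60        # hidden units of the recurrent layer
n_features = 12      # context-dependent articulatory-feature outputs
n_consonants = 10    # stop + nasal consonant classes

# Randomly initialised weights stand in for a trained network.
W_in = rng.normal(scale=0.1, size=(n_hidden, n_acoustic))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_features, n_hidden))

def recurrent_mlp(frames):
    """Elman-style recurrent forward pass over a sequence of acoustic frames.

    frames: (T, n_acoustic) array -> (T, n_features) feature activations.
    """
    h = np.zeros(n_hidden)
    outputs = []
    for x in frames:
        h = np.tanh(W_in @ x + W_rec @ h)                    # recurrent hidden state
        outputs.append(1.0 / (1.0 + np.exp(-(W_out @ h))))   # sigmoid feature outputs
    return np.array(outputs)

def pca_project(X, n_components=3):
    """Project feature activations onto their leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy usage: one 50-frame segment; per-class centroids would normally be
# estimated from training data, here they are random placeholders.
frames = rng.normal(size=(50, n_acoustic))
activations = recurrent_mlp(frames)
projected = pca_project(activations).mean(axis=0)   # segment-level summary

centroids = rng.normal(size=(n_consonants, 3))
predicted = int(np.argmin(np.linalg.norm(centroids - projected, axis=1)))
print("predicted consonant class:", predicted)
```

In this toy version the decision is a nearest-centroid rule in the PCA-projected space; the paper itself only states that classification is performed by principal component analysis of the NN outputs, so the decision rule here is an assumption for illustration.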