Speaker Independent Vowel Recognition
In designing artificial devices to perform human perceptual functions that map initial sensory stimuli to their corresponding responses, there are at least three aspects to consider: the representation of the sensory input, the representation of the output or response, and the mechanism that maps the input to the desired output. Since Dudley invented his vocoder more than four decades ago, many vocoders have been designed to develop an efficient representation of speech, one that contains all the information necessary for separating signals while having minimum redundancy [2]. Within the connectionist framework of backpropagation learning, researchers have tried different network architectures, varying the number of layers and the connectivity, such as Harrison’s experiments with single- and multilayer perceptrons and his use of zonal units instead of making the network fully connected between layers [3]. At the output level, McCulloch and Ainsworth tried two types of output representation in their attempt to recognize steady-state vowels [2]. One is a local representation in which each unit represents a vowel; the other is based on the vowel quadrilateral, in which each vowel is represented by a pair of real numbers indicating the first two formant frequencies. The vowel quadrilateral is illustrated in Fig. 1.
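The two output representations contrasted above can be sketched in a few lines of code. This is an illustrative example, not the authors' implementation: the vowel subset, the helper names, and the formant values (rough average F1/F2 figures in Hz for adult male speakers, in the spirit of Peterson and Barney's measurements [5]) are assumptions for the sketch.

```python
# A small illustrative subset of vowels; the real studies used more.
VOWELS = ["i", "a", "u"]

def local_target(vowel):
    """Local (one-hot) representation: one output unit per vowel."""
    return [1.0 if v == vowel else 0.0 for v in VOWELS]

# Formant-pair representation: each vowel is a point (F1, F2) on the
# vowel quadrilateral. Values are rough illustrative averages in Hz,
# not measured data from the paper.
FORMANTS = {
    "i": (270.0, 2290.0),
    "a": (730.0, 1090.0),
    "u": (300.0, 870.0),
}

def formant_target(vowel):
    """Vowel-quadrilateral representation: (F1, F2) in Hz."""
    return FORMANTS[vowel]
```

With the local representation the network's output layer grows with the number of vowel classes, whereas the formant-pair representation keeps two real-valued outputs regardless of how many vowels are recognized.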
[1] Barak A. Pearlmutter. Learning State Space Trajectories in Recurrent Neural Networks, 1989, Neural Computation.
[2] Alexander H. Waibel et al. Modular Construction of Time-Delay Neural Networks for Speech Recognition, 1989, Neural Computation.
[3] J. Flanagan. Speech Analysis, Synthesis and Perception, 1971.
[4] Teuvo Kohonen et al. Self-Organization and Associative Memory, 1988.
[5] G. E. Peterson et al. Control Methods Used in a Study of the Vowels, 1951.