A neural model of centered tri-gram speech recognition

A relaxation network model that includes higher-order weight connections is introduced. To demonstrate its utility, the model is applied to the speech recognition domain. Traditional speech recognition systems typically consider only the context preceding the word to be recognized. However, intuition suggests that considering both the preceding and the following context should improve recognition accuracy. The work described here tests this hypothesis by applying the higher-order relaxation network to both preceding and following context in speech recognition. The results demonstrate both the general utility of the higher-order relaxation network and its improvement over traditional methods on a speech recognition task.
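To make the idea of "higher-order weight connections" concrete, the sketch below shows a generic Hopfield-style relaxation update extended with third-order weights, so that each unit's net input depends on pairs of other units rather than single units alone. This is only an illustrative sketch under the assumption of bipolar units and asynchronous updates; the function name, the tensor `W3`, and the stopping criterion are hypothetical and not taken from the paper.

```python
import numpy as np

def relax_higher_order(state, W2, W3, steps=100, rng=None):
    """Asynchronous relaxation with second- and third-order weights.

    A generic higher-order Hopfield-style sketch, not the paper's exact model.
    state : (n,) array of +/-1 unit activations
    W2    : (n, n) symmetric pairwise weight matrix, zero diagonal
    W3    : (n, n, n) symmetric third-order weight tensor
    """
    rng = rng or np.random.default_rng()
    n = state.size
    for _ in range(steps):
        changed = False
        for i in rng.permutation(n):
            # Net input: pairwise terms plus triple (tri-gram-like) terms.
            net = W2[i] @ state + state @ W3[i] @ state
            new = 1 if net >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:  # no unit changed, so a stable state has been reached
            break
    return state
```

In a speech-recognition setting one could, for example, let units encode competing word hypotheses at each position and let the third-order weights encode constraints among a word, its predecessor, and its successor, which is one way such a network could exploit both preceding and following context; this mapping is an assumption for illustration, not a description of the paper's encoding.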
