A comparison of processor topologies for a fast trainable neural network for speech recognition

A fast processing system is necessary to achieve adequate learning speed in multilayer neural networks (NNs). Several schemes for mapping a multilayer NN onto a parallel digital processor topology are discussed. For a mesh topology there exists an optimal point at which the computation count is minimized. To support applications such as speaker-independent speech recognition, the authors extend this mesh architecture to operate on sequential, specifically spatio-temporal, inputs. This extension reveals a pipelining scheme that improves processing throughput. The processing-element structure is further extended by introducing dynamic neurons and a corresponding pipelining architecture.
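The throughput benefit of pipelining sequential inputs can be illustrated with a minimal sketch. The layer sizes, activation function, and latch-based stage model below are illustrative assumptions, not details from the paper: each layer of a feedforward NN acts as one pipeline stage, so once the pipeline is full, all layers work in parallel on different input frames and one output is produced per step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative 3-layer network (sizes are assumptions for the sketch).
layer_dims = [(16, 32), (32, 32), (32, 8)]
weights = [rng.standard_normal(d) for d in layer_dims]

# A spatio-temporal input: a sequence of feature frames (e.g. speech frames).
frames = [rng.standard_normal(16) for _ in range(5)]

def sequential_forward(x, weights):
    """Ordinary layer-by-layer forward pass for one frame."""
    for W in weights:
        x = sigmoid(x @ W)
    return x

def pipelined_forward(frames, weights):
    """Stream frames through the layers as pipeline stages.

    latches[s] holds the value waiting at the input of stage s; all
    stages update simultaneously each step, reading last step's latches,
    so stage s works on frame t while stage s+1 works on frame t-1.
    """
    n_stages = len(weights)
    latches = [None] * (n_stages + 1)  # latches[0] is the input feed
    outputs = []
    for t in range(len(frames) + n_stages):  # fill + drain the pipeline
        new = [None] * (n_stages + 1)
        new[0] = frames[t] if t < len(frames) else None
        for s in range(n_stages):
            if latches[s] is not None:
                new[s + 1] = sigmoid(latches[s] @ weights[s])
        if new[n_stages] is not None:
            outputs.append(new[n_stages])
        latches = new
    return outputs

outs = pipelined_forward(frames, weights)
```

After a fill latency of one step per layer, the pipeline emits one output per step, and each output matches the ordinary sequential forward pass on the corresponding frame.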
