To bring neural-network-based solutions to low-cost mass-market products, low-power, low-complexity single-chip neuro-processors capable of implementing large neural networks are needed. We introduce a highly optimized hardware design of a low-complexity, cascadable neuro-processor for realizing feedforward and, in particular, recurrent neural networks. One main feature of the proposed design is a special mixed floating-point and fixed-point arithmetic which, in contrast to high-precision floating-point units, reduces the necessary word lengths and the overall memory requirements. Moreover, a special activity-memory structure enables the efficient calculation of recurrent networks while avoiding communication and data-transfer problems. Finally, we present the application of the proposed design to a speech recognition task and its realization on an FPGA.
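The memory savings of mixed fixed-point/floating-point arithmetic can be illustrated with a minimal sketch. The word lengths below (signed 8-bit Q1.6 storage, 32-bit accumulation, floating-point activation) are illustrative assumptions, not the parameters of the proposed processor:

```python
import numpy as np

FRAC_BITS = 6  # assumed Q1.6 format: 1 sign bit, 1 integer bit, 6 fractional bits

def to_fixed(x, frac_bits=FRAC_BITS):
    """Quantize a float array to signed 8-bit fixed point (assumed format)."""
    q = np.round(np.asarray(x) * (1 << frac_bits)).astype(np.int32)
    return np.clip(q, -128, 127).astype(np.int8)

def neuron(weights_q, inputs_q, frac_bits=FRAC_BITS):
    """Multiply-accumulate in fixed point with a wide (32-bit) accumulator,
    then rescale and apply the activation in floating point."""
    acc = np.sum(weights_q.astype(np.int32) * inputs_q.astype(np.int32))
    pre = acc / float(1 << (2 * frac_bits))  # back to real-valued scale
    return np.tanh(pre)

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, 8)  # weights in [-1, 1] fit the Q1.6 range
x = rng.uniform(-1, 1, 8)

y_fixed = neuron(to_fixed(w), to_fixed(x))
y_float = np.tanh(np.dot(w, x))
# Storing weights as int8 instead of float32 cuts weight memory by 4x,
# at the cost of a small quantization error in the neuron output.
print(y_fixed, y_float)
```

Under these assumptions the per-value quantization error is at most 2^-7, so the pre-activation error for 8 inputs stays below about 0.13, and the tanh output differs from the full-precision result by no more than that.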