Hybrid number representation for the FPGA-realization of a versatile neuro-processor

In order to establish neural-network-based solutions in low-cost products for the mass market, low-power, low-complexity single-chip neuro-processors capable of implementing large neural networks are needed. We introduce a highly optimized hardware design of a low-complexity, cascadable neuro-processor for realizing feedforward and, in particular, recurrent neural networks. One main feature of the proposed design is a special mixed floating-point and fixed-point arithmetic which, in contrast to high-precision floating-point units, reduces the necessary word lengths and the overall memory requirements. Moreover, a special activity memory structure is used to enable the efficient calculation of recurrent networks while avoiding communication and data-transfer problems. Finally, the application of the proposed design to a speech recognition task and its realization on an FPGA are presented.
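The abstract does not specify the exact word lengths of the hybrid representation, but the idea of mixing a compact floating-point weight format with fixed-point activations can be sketched as follows. All formats here are assumptions chosen purely for illustration (Q1.7 fixed-point activations, a 4-bit-exponent/3-bit-mantissa weight format), not the paper's actual parameters; the point is that each product reduces to a small integer multiply plus a shift, which is cheap in FPGA logic.

```python
import math

FRAC = 7       # fixed-point fraction bits for activations (assumed, Q1.7)
EXP_BITS = 4   # minifloat exponent bits for weights (assumed)
MAN_BITS = 3   # minifloat mantissa bits for weights (assumed)

def quant_act(x):
    """Quantize an activation in [-1, 1) to a signed Q1.7 integer."""
    q = int(round(x * (1 << FRAC)))
    return max(-(1 << FRAC), min((1 << FRAC) - 1, q))

def quant_weight(w):
    """Encode a weight as (sign, exponent, mantissa) in a tiny float format."""
    if w == 0.0:
        return (0, 0, 0)
    sign = -1 if w < 0 else 1
    m, e = math.frexp(abs(w))              # abs(w) = m * 2**e, m in [0.5, 1)
    man = int(round(m * (1 << MAN_BITS)))  # MAN_BITS-bit mantissa
    if man == (1 << MAN_BITS):             # rounding overflowed the mantissa
        man >>= 1
        e += 1
    bias = 1 << (EXP_BITS - 1)
    e = max(-bias, min(bias - 1, e))       # clamp to the EXP_BITS range
    return (sign, e, man)

def mac(acts, weights):
    """Multiply-accumulate over mixed formats: each term is an integer
    product qa * man scaled by a power of two (a shift in hardware)."""
    acc = 0.0
    for x, w in zip(acts, weights):
        qa = quant_act(x)
        sign, e, man = quant_weight(w)
        acc += sign * (qa * man) * 2.0 ** (e - MAN_BITS - FRAC)
    return acc
```

With exactly representable inputs the result matches the full-precision dot product, e.g. `mac([0.5, -0.25], [0.5, 0.5])` yields 0.125; for general values the small mantissa introduces a bounded relative error, which is the word-length/precision trade-off the abstract refers to.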