Arithmetic formats for implementing artificial neural networks on FPGAs

This paper investigates the effect of arithmetic representation formats on the implementation of artificial neural networks (ANNs) on field-programmable gate arrays (FPGAs). The focus is on the tradeoffs between the precision and range of various formats and the FPGA resources they require. The basic ANN processing element performs multiplication and addition operations. Therefore, floating-point and fixed-point multipliers/adders were implemented and tested on an FPGA, and their area requirements were compared. The results show that for multilayer perceptron neural networks, floating-point formats offer a more area-efficient implementation than fixed-point formats without a penalty in terms of precision or range. The results also show that the target FPGA device can have a major impact on the resources required.
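As a point of reference for the multiply-and-add processing element the abstract mentions, the following is a minimal software sketch of a fixed-point weighted-sum (multiply-accumulate) for a single neuron. It is not the paper's FPGA implementation; the Q8.8 format, the helper names, and the lack of saturation handling are illustrative assumptions chosen for brevity.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative Q8.8 fixed-point format: 8 integer bits, 8 fractional bits.
 * (Assumed format; the paper compares several fixed- and floating-point formats.) */
#define FRAC_BITS 8

typedef int16_t q8_8_t;

/* Convert a float to Q8.8 (no saturation or rounding, for brevity). */
static q8_8_t to_q8_8(float x) {
    return (q8_8_t)(x * (1 << FRAC_BITS));
}

/* Convert Q8.8 back to float for inspection. */
static float from_q8_8(q8_8_t x) {
    return (float)x / (1 << FRAC_BITS);
}

/* Weighted sum over one neuron's inputs: sum_i w_i * x_i.
 * Each 16x16-bit product needs a wider (32-bit) accumulator
 * before rescaling back to Q8.8. */
static q8_8_t neuron_mac(const q8_8_t *w, const q8_8_t *x, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += (int32_t)w[i] * (int32_t)x[i];   /* Q16.16 partial sums */
    }
    return (q8_8_t)(acc >> FRAC_BITS);          /* truncate back to Q8.8 */
}

int main(void) {
    q8_8_t w[3] = { to_q8_8(0.5f), to_q8_8(-1.25f), to_q8_8(2.0f) };
    q8_8_t x[3] = { to_q8_8(1.0f), to_q8_8(0.5f),  to_q8_8(0.75f) };
    printf("weighted sum = %f\n", from_q8_8(neuron_mac(w, x, 3)));  /* ~1.375 */
    return 0;
}
```

The need for a wider accumulator and the choice of where to truncate are exactly the kinds of precision/range decisions that, in hardware, translate into the area costs the paper measures.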