Reconfigurable hardware for neural networks: binary versus stochastic

This paper focuses on the hardware implementation of neural networks. We propose a reconfigurable, low-cost, and readily available hardware architecture for an artificial neuron, built on field-programmable gate arrays (FPGAs). Because state-of-the-art FPGAs still lack the gate density needed to implement large neural networks with thousands of neurons, we use a stochastic process to implement the computation performed by a neuron efficiently. This paper describes and compares the characteristics of two architectures designed to implement feed-forward, fully connected artificial neural networks: the first FPGA prototype is based on conventional adders and multipliers operating on binary inputs, while the second takes advantage of a stochastic representation of the inputs. The paper compares both prototypes using the classic time × area metric.
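To illustrate the idea behind the stochastic approach (a minimal software sketch, not the paper's FPGA design): in stochastic computing, a value x in [0, 1] is encoded as a Bernoulli bitstream whose proportion of 1s equals x, so multiplication of two independent streams reduces to a bitwise AND, a single gate per bit in hardware. The function names and stream length below are illustrative assumptions.

```python
import random

def encode(x, n):
    """Encode a value x in [0, 1] as a Bernoulli bitstream of length n."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(stream):
    """Recover the encoded value as the proportion of 1s in the stream."""
    return sum(stream) / len(stream)

def stochastic_mul(a, b):
    """Multiply two independent stochastic streams with a bitwise AND."""
    return [x & y for x, y in zip(a, b)]

# Example: 0.5 * 0.8 should decode to roughly 0.4;
# accuracy improves as the stream length n grows.
n = 10_000
a, b = encode(0.5, n), encode(0.8, n)
print(decode(stochastic_mul(a, b)))  # close to 0.4
```

This trade of arithmetic circuitry for stream length is what lets a stochastic neuron fit in far fewer gates than one built from binary multipliers, at the cost of longer evaluation time, which is why the two prototypes are compared on time × area rather than area alone.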