Novel architecture of a digital neuron for FFNN employing special multiplication

This paper presents the design of a new digital neuron architecture for use in feed-forward neural networks (FFNN) and its subsequent implementation on a chip. The proposed neuron uses a special type of multiplication realized by an AND gate. The usual implementations of digital feed-forward neural networks using fixed- and floating-point arithmetic were compared with the novel architecture based on the special multiplication. The investigated FFNN architectures were then implemented in an FPGA and an ASIC, with chip area as the main concern. The chip area and other features of the new neural network architecture and of standard NN architectures were compared and evaluated.
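The key observation behind the AND-gate multiplication can be sketched as follows. For 1-bit operands the arithmetic product coincides with the logical AND, so a full multiplier per synapse can be replaced by a single gate. The neuron model below is an illustrative assumption for binary inputs and weights, not the paper's exact design; the function names are hypothetical.

```python
def and_multiply(x: int, w: int) -> int:
    """'Multiply' two 1-bit values with a single AND gate."""
    return x & w

def neuron_output(inputs, weights, threshold):
    """Binary neuron: sum of AND-products compared against a threshold."""
    s = sum(and_multiply(x, w) for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# The AND-product agrees with ordinary multiplication on {0, 1}:
for x in (0, 1):
    for w in (0, 1):
        assert and_multiply(x, w) == x * w
```

Because the per-synapse product is a single gate rather than a fixed- or floating-point multiplier, the silicon area per neuron can shrink substantially, which is the motivation for the chip-area comparison above.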

[1]  Randy Yates,et al.  Fixed-Point Arithmetic: An Introduction , 2013 .

[2]  S. Oniga,et al.  Optimizing FPGA implementation of Feed-Forward Neural Networks , 2008, 2008 11th International Conference on Optimization of Electrical and Electronic Equipment.

[3]  Sergios Theodoridis,et al.  Pattern Recognition, Fourth Edition , 2008 .

[4]  Daniela Durackova,et al.  Area Chip Consumption by a Novel Digital CNN Architecture for Pattern Recognition , 2009, ICANN.

[5]  Emil Raschman,et al.  New Digital Architecture of CNN for Pattern Recognition , 2009, 2009 MIXDES-16th International Conference Mixed Design of Integrated Circuits & Systems.

[6]  S. Hariprasath,et al.  FPGA implementation of multilayer feed forward neural network architecture using VHDL , 2012, 2012 International Conference on Computing, Communication and Applications.