Architecture of a Novel Low-Cost Hardware Neural Network

Hardware-based machine learning is becoming increasingly popular because of its high computation speed. A desirable characteristic of such hardware is low hardware and design cost. This paper proposes a neural network design approach that reduces hardware cost in terms of adders and multipliers. Adders and multipliers are among the main components of a neural network and are used in every node. The proposed approach halves the number of multipliers and adders in the network, and thus its cost, by sharing each multiplier and adder between two hidden layers. The method has been tested and validated on multiple datasets: the accuracy of the proposed approach is comparable to that of traditional methods in the literature, while it uses only half the number of multipliers and adders. The proposed design is implemented in VHDL on an Altera Arria 10 GX FPGA. Simulation results show that the proposed method retains network performance with acceptable accuracy while reducing hardware resources by 63%.
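
To make the sharing idea concrete, the following is a minimal VHDL sketch, not the paper's actual RTL: a single multiply-accumulate (MAC) unit is time-multiplexed between two hidden layers under a select signal. The entity name, port names, the layer_sel handshake, and the bit widths are all assumptions made for illustration.

```vhdl
-- Illustrative sketch: one multiplier and one adder serve two hidden
-- layers in alternation. All names and widths are assumed, not taken
-- from the paper.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shared_mac is
  generic (
    DATA_W : integer := 16;  -- assumed fixed-point operand width
    ACC_W  : integer := 40   -- assumed accumulator width
  );
  port (
    clk        : in  std_logic;
    rst        : in  std_logic;
    layer_sel  : in  std_logic;                  -- '0' = layer 1, '1' = layer 2
    x_l1, w_l1 : in  signed(DATA_W-1 downto 0);  -- layer-1 activation/weight
    x_l2, w_l2 : in  signed(DATA_W-1 downto 0);  -- layer-2 activation/weight
    acc_l1     : out signed(ACC_W-1 downto 0);
    acc_l2     : out signed(ACC_W-1 downto 0)
  );
end entity shared_mac;

architecture rtl of shared_mac is
  signal sum_l1, sum_l2 : signed(ACC_W-1 downto 0) := (others => '0');
begin
  process (clk)
    variable x, w : signed(DATA_W-1 downto 0);
    variable prod : signed(ACC_W-1 downto 0);
  begin
    if rising_edge(clk) then
      if rst = '1' then
        sum_l1 <= (others => '0');
        sum_l2 <= (others => '0');
      else
        -- Operand mux: pick the active layer's inputs this cycle.
        if layer_sel = '0' then
          x := x_l1;  w := w_l1;
        else
          x := x_l2;  w := w_l2;
        end if;
        -- The single shared multiplier and adder.
        prod := resize(x * w, ACC_W);
        if layer_sel = '0' then
          sum_l1 <= sum_l1 + prod;
        else
          sum_l2 <= sum_l2 + prod;
        end if;
      end if;
    end if;
  end process;

  acc_l1 <= sum_l1;
  acc_l2 <= sum_l2;
end architecture rtl;
```

On an FPGA, a structure like this would map to a single DSP block serving both layers instead of one per layer, which is the intuition behind the reported area saving.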
