Research and Design of a Key Technology for Accelerating Convolution Computation for FPGA-Based CNN

Convolutional neural networks (CNNs) are an important class of deep learning models, widely used in handwriting recognition, natural language processing, and other fields. They are also an active topic in machine learning and computer vision research, which gives them clear research significance and value. This paper first proposes a simple convolutional neural network model, the SpNet model, and analyzes the different types of parallelism in the CNN training process. Given the dominant role of convolution computation, a scheme for accelerating convolution is then designed from both the software and hardware perspectives.
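To make the computational core concrete, the following is a minimal sketch of the direct 2D convolution that dominates CNN workloads; the function name, shapes, and valid-mode/stride-1 choices are illustrative assumptions, not details taken from the paper.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1).

    Illustrative sketch: image and kernel are lists of lists of floats.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):              # each output row
        for c in range(ow):          # each output column
            acc = 0.0
            for i in range(kh):      # multiply-accumulate over the window
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out
```

The four nested loops expose the parallelism an FPGA design can exploit: output positions are independent of one another, and the inner multiply-accumulate loops are the usual targets for loop unrolling and pipelining in hardware.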
