Accelerating Convolutional Neural Network Inference Based on a Reconfigurable Sliced Systolic Array

Convolutional neural networks (CNNs) have achieved great success in many computer vision tasks, such as image recognition, video processing, and object detection. In recent years, many hardware designs have been devoted to accelerating CNN inference. To further speed up CNN inference and reduce wasted data movement, this work proposes a reconfigurable sliced systolic array: 1) The slice mode can be dynamically configured according to the number of network nodes in each layer, achieving high throughput and resource utilization. 2) A tile-column sliding (TCS) processing dataflow is designed to take full advantage of convolution reuse and weight reuse. 3) A four-stage for-loop algorithm partitions the CNN computation into several parts based on the input and output nodes. The entire CNN inference is carried out using integer-only arithmetic originating from TensorFlow Lite. Experimental results show that these strategies lead to significant improvements in inference performance and energy efficiency.
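To make the integer-only arithmetic concrete, below is a minimal sketch of the TensorFlow Lite-style quantized scheme the abstract refers to: activations and weights are affine-quantized to int8, the matrix product is accumulated in int32 with zero-point correction, and the result is requantized through the real-valued multiplier M = (S_a * S_w) / S_out. The function names and the use of a floating-point multiplier (instead of the fixed-point multiplier a hardware implementation would use) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine-quantize a float tensor to int8: q = round(x / scale) + Z.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def integer_only_matmul(q_a, q_w, z_a, z_w, s_a, s_w, s_out, z_out):
    # Accumulate zero-point-corrected int8 operands in int32,
    # as in the TFLite integer-only inference scheme.
    acc = (q_a.astype(np.int32) - z_a) @ (q_w.astype(np.int32) - z_w)
    # Requantize with M = (S_a * S_w) / S_out. On hardware M is
    # realized as a fixed-point multiply and shift; a float is used
    # here purely for readability.
    m = (s_a * s_w) / s_out
    q_out = np.round(m * acc) + z_out
    return np.clip(q_out, -128, 127).astype(np.int8)
```

Dequantizing the int8 result with `(q_out - z_out) * s_out` recovers the floating-point product up to quantization error, which is why the whole inference pipeline can stay in integer arithmetic.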