RNA: Reconfigurable LSTM Accelerator with Near Data Approximate Processing

Near Data Processing (NDP) techniques are introduced into deep learning accelerators because they can greatly relieve the pressure on memory bandwidth. In addition, approximate computing is adopted in neural network acceleration, exploiting the fault tolerance of networks to reduce energy consumption. In this paper, an NDP accelerator with approximate computing features is proposed for LSTM, exploiting data parallelism through reconfigurable structures. First, a hybrid-grained network partitioning model and a scheduling strategy for LSTM are put forward to achieve high processing parallelism. Second, approximate computing units with adaptive precision are designed for LSTM. Then the heterogeneous architecture, RNA, with reconfigurable computing arrays and approximate NDP units is proposed and implemented, driven by configuration codes. The gates and cells of LSTM are modeled as fine-grained operations, organized into coarse-grained tasks, and then mapped onto RNA. Furthermore, the approximate computing units integrated into the NDP units support adaptive precision, which is also controlled by the configuration codes. The proposed RNA architecture achieves 544 GOPS/W energy efficiency when processing LSTM and can be extended to larger and more complex recurrent neural networks. Compared with the state-of-the-art LSTM accelerator, it is 2.14 times more energy efficient.
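For context, the "gates and cells" modeled as fine-grained operations correspond to the standard LSTM cell equations given below; this is the conventional formulation, and the exact variant used in the paper may differ.

\begin{align}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{align}

The matrix-vector products in the four gate equations dominate both computation and memory traffic, which is presumably what the NDP units and the adaptive-precision approximate multipliers target, while the element-wise products and activations form the finer-grained operations scheduled onto the reconfigurable arrays.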
