Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition

With growing model complexity, mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is because the synaptic storage resource on a tile, viz. a crossbar, can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit within the tile resources, leading to a loss of model quality, e.g., accuracy. In this work, we propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units, where each neural unit is a function-computation node with two pre-synaptic connections. This spatial decomposition technique significantly improves crossbar utilization and retains all pre-synaptic connections, thereby avoiding the model-quality loss that connection pruning would introduce. We integrate the proposed technique into an existing SNN mapping framework and evaluate it using machine learning applications on the state-of-the-art DYNAP-SE neuromorphic hardware. Our results demonstrate, on average, a 60% lower crossbar requirement, 9x higher synapse utilization, 62% less wasted energy on the hardware, and between 0.8% and 4.6% higher model quality.
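The core idea of the unrolling can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simplified neuron whose input is a weighted sum of pre-synaptic spikes, and shows how an n-input sum can be computed exactly by a chain of two-input units, each small enough for a crossbar that supports only two pre-synaptic connections per post-synaptic neuron. The function names and the unit-weight carry connection are illustrative assumptions.

```python
def neuron_input(weights, spikes):
    """Reference computation: weighted sum over all n pre-synaptic
    connections of a single neuron (the form a crossbar cannot hold
    when n exceeds its fan-in limit)."""
    return sum(w * s for w, s in zip(weights, spikes))

def unrolled_neuron_input(weights, spikes):
    """Same sum, spatially decomposed into a chain of n-1 two-input
    units: each unit combines the running partial sum (carried forward
    on a unit-weight connection) with one new pre-synaptic input, so
    every original connection is retained and none must be pruned."""
    partial = weights[0] * spikes[0]          # seed with the first input
    for w, s in zip(weights[1:], spikes[1:]):
        partial = 1.0 * partial + w * s       # one 2-input neural unit
    return partial
```

Because the carry weight is fixed at 1, the decomposition is exact: the chained units reproduce the original weighted sum while each unit individually fits the two-connection fan-in constraint of a tile.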
