Multi-Objective Optimization for Size and Resilience of Spiking Neural Networks

Inspired by the connectivity mechanisms of the brain, neuromorphic computing architectures implement Spiking Neural Networks (SNNs) in silicon. These architectures are designed with the goal of producing small, low-power chips that can perform control and machine learning tasks. However, the power consumption of the hardware depends heavily on the size of the network being evaluated on the chip. Furthermore, the accuracy of a trained SNN evaluated on-chip can degrade because voltage and current variations in the hardware perturb the network's learned weights. While efforts are made on the hardware side to minimize these perturbations, a software-based strategy that makes the deployed networks more resilient can further alleviate the issue. In this work, we study SNNs on two neuromorphic architecture implementations with the goal of decreasing their size while increasing their resilience to hardware faults. We leverage an evolutionary algorithm to train the SNNs and propose a multi-objective fitness function that optimizes both the size and the resilience of the SNN. We demonstrate that this strategy leads to well-performing, small networks that are more resilient to hardware faults.
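The abstract describes the approach but does not give the fitness formulation. As a rough illustration only, the Python sketch below shows one plausible shape for such a multi-objective fitness: reward task accuracy, reward accuracy under simulated weight perturbations (resilience), and penalize network size. The weighted-sum combination, the Gaussian fault model, and all names (`Network`, `evaluate_accuracy`, `resilience`, `fitness`, and the `alpha`/`beta`/`gamma` weights) are assumptions made for this sketch, not the paper's method.

```python
# Hypothetical sketch of a multi-objective fitness for an evolutionary SNN
# trainer. The weighted-sum form, the Gaussian perturbation model, and all
# names below are illustrative assumptions, not the paper's formulation.
import random
from dataclasses import dataclass, field


@dataclass
class Network:
    """Toy stand-in for an SNN genome: synapses map (pre, post) -> weight."""
    num_neurons: int
    synapses: dict = field(default_factory=dict)  # (pre, post) -> weight


def evaluate_accuracy(net: Network) -> float:
    """Placeholder task evaluation; a real trainer would simulate the SNN
    on the target task (classification or control) and return its score."""
    return random.random()


def resilience(net: Network, trials: int = 10, fault_scale: float = 0.1) -> float:
    """Mean accuracy under random weight perturbations, emulating the
    voltage/current variations that disturb learned weights on hardware."""
    scores = []
    for _ in range(trials):
        perturbed = Network(
            net.num_neurons,
            {k: w + random.gauss(0.0, fault_scale) for k, w in net.synapses.items()},
        )
        scores.append(evaluate_accuracy(perturbed))
    return sum(scores) / trials


def fitness(net: Network, alpha: float = 1.0, beta: float = 0.01,
            gamma: float = 1.0) -> float:
    """Weighted-sum multi-objective fitness: reward task accuracy and
    fault resilience, penalize network size (neurons + synapses)."""
    size = net.num_neurons + len(net.synapses)
    return (alpha * evaluate_accuracy(net)
            + gamma * resilience(net)
            - beta * size)


if __name__ == "__main__":
    # Example: score a random 5-neuron ring candidate.
    candidate = Network(5, {(i, (i + 1) % 5): random.uniform(-1, 1)
                            for i in range(5)})
    print(f"fitness = {fitness(candidate):.3f}")
```

In an evolutionary loop, candidate genomes would be ranked by this scalarized score (or by Pareto dominance over the three objectives) when selecting parents for the next generation.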
