Bayesian-based Hyperparameter Optimization for Spiking Neuromorphic Systems

Designing a neuromorphic computing system involves selecting several hyperparameters that affect not only the accuracy of the framework but also its energy efficiency and the speed of inference and training. These hyperparameters may be inherent to the training of the spiking neural network (SNN), to the input/output encoding of real-world data into spikes, or to the underlying neuromorphic hardware. In this work, we present a Bayesian-based hyperparameter optimization approach for spiking neuromorphic systems, and we show how this optimization framework can lead to significant improvements in designing accurate neuromorphic computing systems. In particular, we show that this hyperparameter optimization approach can discover the same optimal hyperparameter set for input encoding as a grid search, but with far fewer evaluations and in far less time. We also show the impact of hardware-specific hyperparameters on system performance, and we demonstrate that optimizing these hyperparameters yields significantly better application performance.
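As a concrete illustration of the approach, the sketch below runs a Gaussian-process Bayesian optimization loop over two input-encoding hyperparameters using scikit-optimize's gp_minimize with an expected-improvement acquisition function. The objective evaluate_snn and the hyperparameters num_bins and rate_scale (and their ranges) are hypothetical placeholders, not the actual search space or framework from the paper.

import math

from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical input-encoding hyperparameters: the number of bins used to
# discretize each real-valued feature, and the scale factor mapping feature
# values to spike rates. Names and ranges are illustrative only.
search_space = [
    Integer(1, 32, name="num_bins"),
    Real(0.1, 10.0, prior="log-uniform", name="rate_scale"),
]

def evaluate_snn(params):
    """Return the loss (1 - accuracy) of an SNN trained with these settings.

    Placeholder objective: a smooth synthetic function stands in for the
    encode -> train -> test pipeline so the sketch runs end to end. A real
    system would build the spike encoding, train and evaluate the network
    on the target neuromorphic platform, and return 1 - test accuracy.
    """
    num_bins, rate_scale = params
    return (math.log(rate_scale) - 1.0) ** 2 + 0.01 * (num_bins - 16) ** 2

# Gaussian-process surrogate with expected improvement (EI): each new
# evaluation is placed where the surrogate predicts the best trade-off
# between exploring uncertain regions and exploiting promising ones.
result = gp_minimize(
    evaluate_snn,
    search_space,
    acq_func="EI",
    n_calls=50,
    random_state=0,
)
print("best hyperparameters:", result.x)
print("best loss:", result.fun)

Because the surrogate concentrates evaluations in promising regions of the search space, the 50 objective calls here stand in for the hundreds of SNN training runs that a dense grid over the same two ranges would require, which is the source of the speedup the abstract describes.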
