Parallel Evolutionary Optimization for Neuromorphic Network Training

One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the United States' largest open science supercomputer, and on BOB, a Beowulf-style cluster of Raspberry Pis. We also examine how to improve EO by evaluating commonality among higher-performing neural networks, and present the results of a study evaluating the EO runs performed on Titan.
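To make the structure of parallel EO concrete, the sketch below shows a generic master/worker evolutionary loop in which only fitness evaluation (the expensive step, e.g. simulating a spiking network on a control task) is farmed out to parallel workers. This is a minimal illustration and not the authors' DANNA implementation: the flat parameter vector standing in for a network configuration, the placeholder `fitness` objective, and all constants are assumptions for the example.

```python
# Minimal sketch of master/worker parallel evolutionary optimization.
# NOT the paper's DANNA code: the genome is a hypothetical flat parameter
# vector, and `fitness` is a toy objective standing in for a spiking-network
# simulation that would run on each worker.
import random
from multiprocessing import Pool

POP_SIZE = 64
GENOME_LEN = 32
GENERATIONS = 20
MUTATION_RATE = 0.1

def fitness(genome):
    # Placeholder objective; in the paper's setting this step would invoke
    # a neuromorphic simulator on a task such as pole balancing.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    # Gaussian perturbation applied gene-by-gene with fixed probability.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def tournament(pop, scores, k=3):
    # Pick the fittest of k randomly chosen individuals.
    contenders = random.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: scores[i])]

def main():
    pop = [[random.random() for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    with Pool() as pool:  # worker processes evaluate fitness in parallel
        for gen in range(GENERATIONS):
            scores = pool.map(fitness, pop)  # the parallel, expensive step
            print(f"gen {gen:2d}  best fitness {max(scores):.4f}")
            # The master performs selection and variation serially, then
            # redistributes the new population to the workers.
            pop = [mutate(crossover(tournament(pop, scores),
                                    tournament(pop, scores)))
                   for _ in range(POP_SIZE)]

if __name__ == "__main__":
    main()
```

On a multicore node, a process pool like this suffices; on a machine of Titan's scale, the same master/worker split is typically expressed over MPI ranks, or replaced by an island model in which subpopulations evolve independently and exchange migrants.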
