High-performance simulation of neural networks

Artificial neural networks have been applied to a wide range of problems in many areas. The back-propagation algorithm is frequently used to train such networks, but it is time consuming when implemented on general-purpose computers. This paper examines methods of simulating back-propagation neural networks on parallel systems to achieve high performance. Training an artificial neural network amounts to updating its weights within several nested loops, and parallel simulation methods can be classified by which of these loops they execute in parallel. The paper discusses these methods and describes example implementations of each.
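
To make the loop structure concrete, the sketch below shows serial back-propagation training for a single-hidden-layer network, with comments marking the loops that the different parallelization strategies target. This is a minimal illustration, not code from the paper: the function name `train`, the layer sizes, and the learning rate are assumptions chosen for readability.

```python
import numpy as np

# Illustrative layer sizes; not taken from the paper.
N_IN, N_HID, N_OUT = 8, 16, 4

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(N_HID, N_IN))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HID))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(patterns, targets, epochs=100, lr=0.5):
    global W1, W2
    # Loop 1: over epochs. Training-session parallelism would instead
    # run several independent training sessions concurrently.
    for _ in range(epochs):
        # Loop 2: over training patterns. Parallelizing this loop gives
        # training-set (pattern) parallelism: each processor handles a
        # subset of the patterns and the weight updates are combined.
        for x, t in zip(patterns, targets):
            # Loops 3 and 4 are implicit in the matrix-vector products:
            # over the neurons of a layer (node parallelism) and over
            # each neuron's incoming weights (weight parallelism).
            h = sigmoid(W1 @ x)          # forward pass, hidden layer
            y = sigmoid(W2 @ h)          # forward pass, output layer
            # Backward pass: the generalized delta rule for a
            # squared-error cost with sigmoid units.
            d_out = (t - y) * y * (1 - y)
            d_hid = (W2.T @ d_out) * h * (1 - h)
            W2 += lr * np.outer(d_out, h)
            W1 += lr * np.outer(d_hid, x)

# Toy usage: fit a random input-target mapping.
X = rng.normal(size=(20, N_IN))
T = rng.uniform(size=(20, N_OUT))
train(X, T)
```

Pattern parallelism, for example, would replicate the weights on each processor, let each processor run the inner loop over its share of the patterns, and sum the accumulated weight changes between updates; node and weight parallelism instead partition the matrix-vector products themselves across processing elements.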
