Massively parallel architectures for large scale neural network simulations

A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale neural network simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt two-dimensional processor connections, the most efficient topology for wafer-scale integration (WSI). They also address the connectivity problem, the performance degradation caused by the data-transmission bottleneck, and the load-balancing problem, all of which must be solved for efficient parallel processing of large-scale neural network simulations. A general neuron model is defined, and an implementation of the TLA on transputers is described. A Hopfield neural network and a multilayer perceptron have been implemented on this system and applied to the traveling salesman problem and to identity mapping, respectively. It is shown that the performance increases almost in proportion to the number of node processors.

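To make the lattice mapping concrete, here is a minimal Python sketch, not from the paper, of how a Hopfield-style update might be distributed over a P x P toroidal lattice of node processors: the weight matrix is partitioned into equal blocks, each lattice row accumulates the partial sums for its output slice, and the inner loop stands in for the ring communication of the torus. All names here (make_lattice_blocks, hopfield_step) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def make_lattice_blocks(W, P):
    """Split an N x N weight matrix into a P x P grid of blocks
    (assumes N is divisible by P), one block per node processor."""
    N = W.shape[0]
    b = N // P
    return [[W[i*b:(i+1)*b, j*b:(j+1)*b] for j in range(P)] for i in range(P)]

def hopfield_step(blocks, x, P):
    """One synchronous update: lattice row i owns output slice i, and
    partial products arrive from the P nodes in that row, emulating
    the toroidal (ring) communication pattern."""
    N = x.shape[0]
    b = N // P
    u = np.zeros(N)
    for i in range(P):                       # each lattice row of nodes
        for j in range(P):                   # partial sums around the ring
            u[i*b:(i+1)*b] += blocks[i][j] @ x[j*b:(j+1)*b]
    return np.sign(u)                        # threshold activation

# Usage: store one pattern with Hebbian weights, recover it from a
# corrupted probe after a few distributed update steps.
N, P = 16, 4
pattern = np.sign(np.random.randn(N))
W = np.outer(pattern, pattern) - np.eye(N)   # zero self-connections
blocks = make_lattice_blocks(W, P)
probe = pattern.copy()
probe[:3] *= -1                              # flip a few bits
for _ in range(5):
    probe = hopfield_step(blocks, probe, P)
print(np.array_equal(probe, pattern))        # True: pattern recovered
```

The sketch mirrors the load-balancing idea of the architecture in a simple way: every node receives an equal-sized weight block and an equal share of the state vector, so no processor or link carries more traffic than any other.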