A Systolic Array Exploiting the Inherent Parallelisms of Artificial Neural Networks

Abstract: The systolic array implementation of artificial neural networks is one of the best solutions to the communication problems created by highly interconnected neurons. In this paper, a two-dimensional systolic array for backpropagation neural networks is presented. The design is based on the classical systolic algorithm for matrix-vector multiplication and exploits the inherent parallelisms of backpropagation neural networks. It executes the forward and backward passes in parallel and exploits the pipelined parallelism of multiple patterns in each pass. The estimated performance of this design shows that the pipelining of multiple patterns is an important factor in VLSI neural network implementations.
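As a rough illustration of the classical systolic matrix-vector multiplication the abstract builds on (a minimal sketch, not the authors' VLSI design), the Python simulation below models a linear array of processing elements, one per output neuron, with the input vector streamed through the array in skewed order. The function name systolic_matvec, the NumPy-based simulation, and the linear-array model are illustrative assumptions.

```python
# Hypothetical cycle-by-cycle simulation of a systolic matrix-vector product
# y = W @ x on a linear array of processing elements (PEs). PE i holds row i
# of W and an accumulator; input element x[j] reaches PE i at cycle t = i + j,
# so the full product is available after n + m - 1 cycles.

import numpy as np

def systolic_matvec(W, x):
    m, n = W.shape
    acc = np.zeros(m)                      # one accumulator per PE
    cycles = n + m - 1                     # pipeline depth of the linear array
    for t in range(cycles):                # one systolic cycle per iteration
        for i in range(m):                 # all PEs operate concurrently in hardware
            j = t - i                      # input element reaching PE i at cycle t
            if 0 <= j < n:
                acc[i] += W[i, j] * x[j]   # single multiply-accumulate per cycle
    return acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 6))        # weights of one hypothetical layer
    x = rng.standard_normal(6)             # one input pattern
    assert np.allclose(systolic_matvec(W, x), W @ x)
    print("systolic result matches W @ x")
```

In the design described by the abstract, the pipelining of multiple patterns corresponds to streaming successive input vectors through the array back to back, so the processing elements remain busy between patterns instead of idling while one pattern drains the pipeline.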
