A parallel bus architecture for artificial neural networks

A new bus architecture for stochastic artificial neural networks is discussed. A recent VLSI implementation connects many neurons by broadcasting each neuron's address and activation level in turn for all other neurons to process; such a scheme requires N steps to completely connect N neurons. The proposed architecture instead uses stochastic activation levels. Because these outputs are single-bit values, the global bus has room for several neurons to fire in parallel. Each neuron processes the outputs of a whole group of neurons at once, reducing both the number of addressing steps on the bus and the width of the neuron addressing field. This neuron grouping is especially applicable to backpropagation networks. A simulator for the architecture was written and tested.
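The bus-step saving described above can be sketched in a few lines. This is an illustrative model only, not the paper's implementation: the function names, the group size, and the use of a Bernoulli sample to stand in for a stochastic activation pulse are all assumptions made for the example.

```python
import random

def stochastic_bit(activation):
    """One stochastic output pulse: 1 with probability `activation` (0..1).
    Averaged over many bus cycles, the pulse stream approximates the
    real-valued activation level."""
    return 1 if random.random() < activation else 0

def bus_steps(n_neurons, neurons_per_word):
    """Addressing steps needed to broadcast all N neuron outputs when
    `neurons_per_word` single-bit outputs share one bus word (ceil division)."""
    return -(-n_neurons // neurons_per_word)

# Serial scheme: one full activation per bus word -> N steps.
print(bus_steps(64, 1))   # 64 steps for 64 neurons
# Grouped stochastic scheme: e.g. 8 one-bit outputs per word -> N/8 steps.
print(bus_steps(64, 8))   # 8 steps for the same 64 neurons
```

Grouping also shrinks the address field: with groups of 8, the bus need only identify which group is firing, saving 3 address bits in this example.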
