Stochastic arithmetic implementations of neural networks with in situ learning

An implementation of artificial neural networks using stochastic arithmetic that is capable of in situ learning is described. Stochastic arithmetic encodes values as pulse densities, allowing addition, multiplication, and the nonlinearity to be implemented in a very small amount of digital hardware. A VLSI implementation of such a network is capable of processing 100,000 training vectors per second. The performance of this architecture is demonstrated by two examples.
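As background for the abstract's claim that pulse-density encoding makes multiplication and addition cheap, here is a minimal software sketch of unipolar stochastic arithmetic. It is an illustration of the general technique (multiplication as a bitwise AND of independent pulse streams, scaled addition as a multiplexer), not the authors' specific circuit; the function names and the stream length are assumptions for the example.

```python
import random

def bitstream(p, n, rng):
    # Unipolar encoding: a value p in [0, 1] becomes a stream of n bits
    # whose pulse density (fraction of 1s) is p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(a, b):
    # ANDing two independent streams yields pulse density a*b,
    # so multiplication costs a single gate per bit.
    return [x & y for x, y in zip(a, b)]

def stochastic_add(a, b, rng):
    # A 2:1 multiplexer driven by a fair select stream computes the
    # scaled sum (a + b) / 2, keeping the result in [0, 1].
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def density(s):
    # Decode a stream back to its represented value.
    return sum(s) / len(s)

rng = random.Random(0)
n = 100_000  # longer streams give lower-variance estimates
sa = bitstream(0.6, n, rng)
sb = bitstream(0.5, n, rng)
print(density(stochastic_multiply(sa, sb)))      # close to 0.6 * 0.5 = 0.30
print(density(stochastic_add(sa, sb, rng)))      # close to (0.6 + 0.5) / 2 = 0.55
```

The precision of the result depends only on the stream length, which is the classic stochastic-computing trade of accuracy for hardware area.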