Stochastic Neural Computation I: Computational Elements

This paper examines a number of stochastic computational elements employed in artificial neural networks, several of which are introduced for the first time, together with an analysis of their operation. We briefly describe multiplication, squaring, addition, subtraction, and division circuits in both unipolar and bipolar formats, the principles of which are well known, at least for unipolar signals. We have introduced several modifications to improve the speed of the division operation. The primary contribution of this paper, however, is the introduction of several state machine-based computational elements for performing sigmoid nonlinearity mappings, linear gain, and exponentiation functions. We also describe an efficient method for generating stochastic signals and for converting between stochastic and deterministic binary signals. The validity of the approach is demonstrated in a companion paper [22] through a sample application, the recognition of noisy optical characters using soft competitive learning. The generalization performance of the stochastic network maintains a squared error within 10 percent of that of a floating-point implementation over a wide range of noise levels. While the accuracy of stochastic computation may not compare favorably with more conventional binary radix-based computation, its low circuit area and power, together with its speed characteristics, may, in certain situations, make it attractive for VLSI implementations of artificial neural networks.
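The arithmetic elements named in the abstract follow the standard stochastic-computing constructions (cf. Gaines [23]): an AND gate multiplies unipolar streams, an XNOR gate multiplies bipolar streams, a two-input multiplexer performs scaled addition, and a comparator against a pseudorandom sequence converts a deterministic binary value into a stochastic bit stream. The Python sketch below simulates these elements in software purely to make the encodings concrete; the stream length, the NumPy generator (standing in for the paper's LFSR/cellular-automaton sources), and the function names are illustrative assumptions, not the authors' hardware implementation.

```python
# Minimal software sketch of standard stochastic-computing elements
# (unipolar AND multiply, bipolar XNOR multiply, MUX scaled add,
# comparator-based conversion). Stream length N and the NumPy PRNG
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # bit-stream length; accuracy improves roughly as 1/sqrt(N)

def to_stream(p, n=N):
    """Deterministic value -> stochastic stream: emit a 1 whenever p exceeds
    a fresh pseudorandom number (a comparator in hardware)."""
    return (rng.random(n) < p).astype(np.uint8)

def from_stream(bits):
    """Stochastic stream -> deterministic estimate: fraction of 1s (a counter)."""
    return bits.mean()

# Unipolar format: x in [0, 1] is the probability of a 1 in the stream.
a, b = 0.8, 0.5
prod_uni = from_stream(to_stream(a) & to_stream(b))   # AND gate multiplies
print(f"unipolar {a}*{b} ~ {prod_uni:.3f}")            # ~0.400

# Bipolar format: x in [-1, 1] is encoded as p = (x + 1) / 2.
enc = lambda x: (x + 1) / 2
dec = lambda p: 2 * p - 1
x, y = -0.6, 0.7
xnor = (~(to_stream(enc(x)) ^ to_stream(enc(y)))) & 1  # XNOR gate multiplies
prod_bip = dec(from_stream(xnor))
print(f"bipolar {x}*{y} ~ {prod_bip:.3f}")              # ~-0.420

# Scaled addition: a 2-to-1 multiplexer with a probability-1/2 select
# stream computes (a + b) / 2 in either format.
sel = to_stream(0.5)
sum_uni = from_stream(np.where(sel, to_stream(a), to_stream(b)))
print(f"unipolar ({a}+{b})/2 ~ {sum_uni:.3f}")          # ~0.650
```

As the closing sentence of the abstract suggests, accuracy is traded for circuit simplicity: these estimates converge only as roughly 1/sqrt(N) in the stream length, which is why the companion paper's comparison against a floating-point implementation is the relevant benchmark.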

[1] H. Card, Doubly stochastic Poisson processes in artificial neural learning, 1998, IEEE Trans. Neural Networks.

[2] Leopoldo García Franquelo et al., Fully parallel stochastic computation architecture, 1996, IEEE Trans. Signal Process.

[3] Howard C. Card et al., Stochastic arithmetic implementations of neural networks with in situ learning, 1993, IEEE International Conference on Neural Networks.

[4] Antonio Torralba et al., Two digital circuits for a fully parallel stochastic neural network, 1995, IEEE Trans. Neural Networks.

[6] Geoffrey E. Hinton et al., The "wake-sleep" algorithm for unsupervised neural networks, 1995, Science.

[7] Yasuji Sawada et al., Functional abilities of a stochastic logic neural network, 1992, IEEE Trans. Neural Networks.

[8] Jack L. Meador et al., Programmable impulse neural circuits, 1991, IEEE Trans. Neural Networks.

[9] Alan F. Murray et al., Pulse-based computation in VLSI neural networks, 1999.

[10] Siyad C. Ma et al., Testability Features of the AMD-K6 Microprocessor, 1998, IEEE Des. Test Comput.

[11] Wolfgang Maass et al., Fast Sigmoidal Networks via Spiking Neurons, 1997, Neural Computation.

[12] Thomas K. Miller et al., A digital architecture employing stochasticism for the simulation of Hopfield neural nets, 1989.

[13] Howard C. Card et al., Parallel pseudorandom number generation in GaAs cellular automata for high speed circuit testing, 1995, J. Electron. Test.

[14] Anders Krogh et al., Introduction to the theory of neural computation, 1994, The Advanced Book Program.

[15] John Shawe-Taylor et al., Learning in Stochastic Bit Stream Neural Networks, 1996, Neural Networks.

[16] Howard C. Card et al., Parallel Random Number Generation for VLSI Systems Using Cellular Automata, 1989, IEEE Trans. Computers.

[17] M. A. Mahowald et al., Evolving analog VLSI neurons, 1992.

[18] John G. Taylor et al., Learning Probabilistic RAM Nets Using VLSI Structures, 1992, IEEE Trans. Computers.

[19] Max Stanford Tomlinson et al., A digital neural network architecture for VLSI, 1990, IJCNN International Joint Conference on Neural Networks.

[20] Kimmo Kaski et al., Pulse-density modulation technique in VLSI implementations of neural network algorithms, 1990.

[21] Michael A. Shanblatt et al., Architecture and statistical model of a pulse-mode digital multilayer neural network, 1995, IEEE Trans. Neural Networks.

[22] Howard C. Card et al., Stochastic Neural Computation II: Soft Competitive Learning, 2001, IEEE Trans. Computers.

[23] Brian R. Gaines et al., Stochastic Computing Systems, 1969.

[24] Geoffrey E. Hinton et al., Learning and relearning in Boltzmann machines, 1986.

[25] Heekuck Oh et al., Neural Networks for Pattern Recognition, 1993, Adv. Comput.

[26] Alan F. Murray et al., Pulse-stream VLSI neural networks mixing analog and digital techniques, 1991, IEEE Trans. Neural Networks.

[27] Alan F. Murray et al., Asynchronous VLSI neural networks using pulse-stream arithmetic, 1988.

[28] John G. Elias et al., Artificial Dendritic Trees, 1993, Neural Computation.

[29] Rodney J. Douglas et al., A pulse-coded communications infrastructure for neuromorphic systems, 1999.

[30] John G. Taylor et al., Generalization in probabilistic RAM nets, 1993, IEEE Trans. Neural Networks.

[31] Douglas S. Reeves et al., The TInMANN VLSI chip, 1992, IEEE Trans. Neural Networks.

[32] John Shawe-Taylor et al., Stochastic bit-stream neural networks, 1999.