Limits to neural computations in digital arrays

In this paper we discuss the properties of artificial neural network computations performed by digital VLSI systems, and comment more generally on computational models, learning algorithms, and digital implementations of ANNs. The analysis applies to regular arrays of processing elements performing binary integer arithmetic at various bit precisions. Computation rates are limited by power dissipation, which depends on the required precision and on packaging constraints such as pinout; they also depend strongly on the minimum feature size of the CMOS technology. We emphasize custom digital implementations with low bit precision, because such circuits require reduced power and silicon area. One way this may be achieved is through stochastic arithmetic, with pseudorandom number generation based on cellular automata circuits.
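The stochastic-arithmetic approach mentioned above can be illustrated in software: a value in [0, 1] is encoded as a Bernoulli bit stream, so multiplication reduces to a single AND gate per stream bit, and the pseudorandom bits can be drawn from a hybrid rule-90/150 cellular automaton of the kind used for parallel VLSI random number generation. The Python sketch below is illustrative only; the function names, CA width, and rule pattern are assumptions for the example, not details taken from the paper.

```python
import random

def ca_step(state, rules):
    """One synchronous step of a hybrid rule-90/150 one-dimensional CA
    with null (zero) boundary cells.  rules[i] selects the local rule:
      rule 90  -> new cell = left XOR right
      rule 150 -> new cell = left XOR self XOR right
    """
    n = len(state)
    out = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i + 1 < n else 0
        bit = left ^ right
        if rules[i] == 150:
            bit ^= state[i]
        out.append(bit)
    return out

def ca_random_bits(n_bits, width=16, seed=1):
    """Generate pseudorandom bits by iterating the CA and tapping the
    middle cell (a simplification; hardware taps many cells in parallel).
    The alternating 90/150 pattern is an assumption -- real designs choose
    the pattern to maximize the state-cycle length."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(width)]
    rules = [90 if i % 2 == 0 else 150 for i in range(width)]
    bits = []
    for _ in range(n_bits):
        state = ca_step(state, rules)
        bits.append(state[width // 2])
    return bits

def stochastic_multiply(p, q, n_bits=100_000, seed=1):
    """Multiply p and q (both in [0, 1]) with stochastic arithmetic:
    encode each as an independent Bernoulli bit stream, AND the streams
    bitwise, and estimate the product as the fraction of 1s at the output."""
    rng = random.Random(seed)
    ones = sum((rng.random() < p) & (rng.random() < q) for _ in range(n_bits))
    return ones / n_bits
```

With 100,000 stream bits, `stochastic_multiply(0.5, 0.5)` lands close to 0.25; the estimate's precision improves only as the square root of the stream length, which is consistent with restricting the approach to low-bit-precision networks as the abstract suggests.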
