A digital neural network architecture for VLSI
An approach to overcoming the two most serious shortcomings of previous artificial neural network implementations is discussed. A flexible architecture that permits the realization of arbitrary network topologies and dimensions is presented. The performance of this architecture is independent of the size of the network and supports the processing of typically 100,000 patterns per second. The key innovation is the representation of neuron activations and synaptic weights as stochastic functions of time, which leads to efficient implementations of the synapses; synapse densities per unit of silicon area exceeding even those of analog implementations have been achieved. Because the neuron activations, like the synaptic computations, are represented digitally, the architecture can be fabricated using a variety of standard, low-cost semiconductor processes. A pair of general-purpose chips (SU3232 and NU32) that permits post facto construction of neural networks of arbitrary topology and virtually unlimited dimensions is presented.
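The abstract does not spell out the encoding used on the SU3232 and NU32 chips, but the idea of representing activations and weights as stochastic functions of time is the basis of stochastic computing, where multiplying two values reduces to ANDing two independent random bit streams. The sketch below is a minimal software illustration of that principle under a unipolar coding assumption; the function names and stream length are illustrative, not taken from the paper.

```python
import random


def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as a stochastic bit stream of length n:
    each bit is 1 with probability p (unipolar coding, an assumption here)."""
    return [1 if rng.random() < p else 0 for _ in range(n)]


def stochastic_multiply(a_bits, b_bits):
    """Bitwise AND of two independent streams: the fraction of 1s in the
    result approximates the product of the two encoded values, which is why
    a synapse can be as small as a single AND gate plus stream generation."""
    return [a & b for a, b in zip(a_bits, b_bits)]


def decode(bits):
    """Recover the encoded value as the observed frequency of 1s."""
    return sum(bits) / len(bits)


if __name__ == "__main__":
    rng = random.Random(0)
    n = 10_000                       # longer streams trade time for accuracy
    x, w = 0.8, 0.5                  # neuron activation and synaptic weight
    estimate = decode(stochastic_multiply(to_stream(x, n, rng),
                                          to_stream(w, n, rng)))
    print(f"expected {x * w:.3f}, stochastic estimate {estimate:.3f}")
```

In hardware, the accuracy-versus-speed trade-off is set by the stream length rather than by multiplier width, which is consistent with the abstract's claim of very high synapse density at a fixed pattern-processing rate.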