PRECISION OF COMPUTATIONS IN ANALOG NEURAL NETWORKS
VLSI implementations of analog neural networks have been investigated intensively during the last five years. Except for some specific realizations where precision and the adaptation rule matter more than the size of the network [1] [2], most applications of neural networks require large arrays of neurons and synapses. The fan-out of the neuron is not the crucial point: digital or analog neurons can easily be designed to drive a large number of synapse inputs (in the next layer for multi-layered networks, in the same layer for feedback networks). Fan-in is more critical: whatever the mode of transmission between synapses and neurons (voltage, current, pulses, ...), the neuron input must have a large dynamic range if it is connected to hundreds of synapses. Digital neurons are, of course, a solution: if the dynamic range of the neuron inputs has to be increased, more bits are used and the required precision is obtained. However, digital cells are in general much larger than their analog counterparts: for example, a neuron connected to 100 synapses must contain a digital adder with 100 inputs, each coded in several bits. The silicon area occupied by the cells and by the connections between cells is incompatible with the integration of a large number of synapses and neurons on a single chip.
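The fan-in argument above can be made concrete with a short calculation: summing N synapse contributions, each coded in b bits, requires an accumulator of roughly b + ceil(log2(N)) bits to avoid overflow. The sketch below illustrates this; the helper name and the choice of 8-bit synapse coding are illustrative assumptions (the text only says "several bits").

```python
import math

def accumulator_bits(num_synapses: int, input_bits: int) -> int:
    """Bits needed to accumulate num_synapses values of input_bits each
    without overflow: input_bits + ceil(log2(num_synapses))."""
    return input_bits + math.ceil(math.log2(num_synapses))

# The example from the text: a neuron fed by 100 synapses.
# Assuming each synapse output is coded in 8 bits, the digital adder
# must carry 8 + ceil(log2(100)) = 8 + 7 = 15 bits.
print(accumulator_bits(100, 8))  # -> 15
```

This growth in word width, on top of the 100-input adder tree itself, is what makes the digital neuron's silicon area scale poorly compared with an analog current-summing node, where fan-in costs only a wire.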
[1] M. A. Sivilotti et al., "Real-time visual computations using analog CMOS processing arrays," 1987.
[2] A. F. Murray et al., "Pulse arithmetic in VLSI neural networks," IEEE Micro, 1989.
[3] E. A. Vittoz et al., "CMOS Integration of Herault-Jutten Cells for Separation of Sources," in Analog VLSI Implementation of Neural Systems, 1989.
[4] M. Verleysen et al., "Neural networks for high-storage content-addressable memory: VLSI circuit and learning algorithm," 1989.