The design of an analog VLSI neural network processor for scientific and engineering applications such as pattern recognition and image compression is described. The backpropagation and self-organization learning schemes used in artificial neural networks require high-precision multiplication and summation. The analog neural network design presented here performs high-speed feedforward computation in parallel, while a digital signal processor or a host computer can be used to update the synapse weights during the learning phase. The analog computing blocks consist of a synapse matrix and the input and output neuron arrays. Each output neuron is composed of a current-to-voltage converter and a sigmoid function generator with a controllable voltage gain. An improved Gilbert multiplier is used for the synapse design. The input and output neurons are tailored to reduce the network settling time and to minimize the silicon area used for implementation.
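As a rough behavioral sketch (not the paper's circuit-level model), the feedforward path described above can be summarized as a parallel multiply-and-sum through the synapse matrix followed by a sigmoid output neuron with adjustable gain. The array sizes, gain value, and function names below are illustrative assumptions.

```python
import numpy as np

def neuron_sigmoid(current, gain=1.0):
    """Output neuron model: current-to-voltage conversion followed by a
    sigmoid function generator with a controllable voltage gain."""
    return 1.0 / (1.0 + np.exp(-gain * current))

def feedforward(inputs, weights, gain=1.0):
    """Synapse matrix model: each synapse multiplies an input by its weight
    (as a Gilbert multiplier would), and the resulting currents are summed
    at each output neuron in parallel."""
    summed_currents = weights @ inputs
    return neuron_sigmoid(summed_currents, gain)

# Example: 4 input neurons driving 3 output neurons; the weights would be
# updated by a DSP or host computer during the learning phase.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=4)       # input neuron voltages
W = rng.uniform(-1.0, 1.0, size=(3, 4))  # synapse weights
print(feedforward(x, W, gain=2.0))
```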