To fully exploit the real-time computational capabilities of neural networks (NNs) in image-processing applications, a high-performance VMEbus-based analog neurocomputing architecture (VMENA) is developed. The inherent parallelism of an analog VLSI NN embodiment enables a fully parallel, and hence high-speed, high-throughput, hardware implementation of NN architectures. The VMEbus interface is chosen specifically to overcome the limited bandwidth of the PC host computer's Industry Standard Architecture (ISA) bus. The NN board is built around cascadable VLSI NN chips (32 × 32 synapse chips and 32 × 32 neuron/synapse composite chips) providing a total of 64 neurons and over 8K synapses. Under software control, the system architecture can be flexibly reconfigured between feedback and feedforward modes, and once a mode is selected, the NN topology (i.e., the number of layers and the number of neurons in the input, hidden, and output layers) can be carved out of the available neuron and synapse resources. An efficient hardware-in-the-loop cascade backpropagation (CBP) learning algorithm is implemented on the hardware. This supervised learning algorithm allows the network architecture to evolve dynamically by adding hidden neurons while modulating their synaptic weights using standard gradient-descent backpropagation. As a demonstration, the NN hardware system is applied to a computationally intensive map-data classification problem. Training sets ranging in size from 50 to 2500 pixels are used to train the network, and the best hardware-in-the-loop learning result is found to be comparable to the best result of the software NN simulation. Once trained, the VMENA subsystem processes approximately 75,000 feedforward passes per second, an over twofold throughput improvement relative to the ISA-bus-based neural network architecture.
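The cascade backpropagation idea described above can be illustrated in software. The sketch below is a minimal NumPy toy, not the paper's hardware-in-the-loop implementation: a small sigmoid network is trained by gradient descent, and a hidden neuron is appended whenever the training error plateaus. The plateau criterion, learning rate, and initialization are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CascadeBPNet:
    """Toy CBP-style network: one growable sigmoid hidden layer.
    Hyperparameters and growth rule are illustrative, not the paper's."""

    def __init__(self, n_in, n_out, n_hidden=1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W1 = self.rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = self.rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        h = sigmoid(X @ self.W1 + self.b1)
        return h, sigmoid(h @ self.W2 + self.b2)

    def train_step(self, X, T, lr):
        """One full-batch gradient-descent step on MSE loss."""
        h, y = self.forward(X)
        err = y - T
        dy = err * y * (1.0 - y) / len(X)       # delta at output layer
        dh = (dy @ self.W2.T) * h * (1.0 - h)   # delta backpropagated to hidden
        self.W2 -= lr * (h.T @ dy); self.b2 -= lr * dy.sum(0)
        self.W1 -= lr * (X.T @ dh); self.b1 -= lr * dh.sum(0)
        return float(np.mean(err ** 2))

    def add_hidden_neuron(self):
        """Cascade step: append one hidden unit with small random weights."""
        self.W1 = np.hstack([self.W1,
                             self.rng.normal(0.0, 0.5, (self.W1.shape[0], 1))])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.vstack([self.W2,
                             self.rng.normal(0.0, 0.5, (1, self.W2.shape[1]))])

def train_cbp(net, X, T, lr=2.0, epochs=20000, patience=500,
              tol=1e-4, target=0.01):
    """Train; when improvement stalls for `patience` epochs, grow the net."""
    best, stall, mse = np.inf, 0, np.inf
    for _ in range(epochs):
        mse = net.train_step(X, T, lr)
        if mse < target:
            break
        if best - mse > tol:
            best, stall = mse, 0
        else:
            stall += 1
            if stall >= patience:        # error plateau: add a hidden neuron
                net.add_hidden_neuron()
                best, stall = np.inf, 0
    return mse
```

On XOR, for example, a single hidden unit cannot reach the error target, so the plateau test fires and the network grows until the problem becomes learnable, mirroring in miniature how CBP lets the topology evolve during supervised training.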