Run-time reconfiguration is a way of more fully exploiting the flexibility of reconfigurable FPGAs. The run-time reconfiguration artificial neural network (RRANN) uses run-time reconfiguration to increase the hardware density of FPGAs. The RRANN architecture also allows large amounts of parallelism and is highly scalable. RRANN divides the back-propagation algorithm into three sequentially executed stages and configures the FPGAs to execute only one stage at a time. The FPGAs are reconfigured as part of normal execution in order to change stages. Using reconfigurability in this way increases the number of hardware neurons a single Xilinx XC3090 can implement by 500%. Performance is affected by reconfiguration overhead, but this overhead becomes insignificant in large networks, and improved configuration methods reduce it further. Run-time reconfiguration is a flexible realization of the time/space trade-off. The RRANN architecture has been designed and built using commercially available hardware, and its performance has been measured.
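To make the time/space trade-off concrete, the following is a minimal sketch of how the per-epoch cost of RRANN's time-multiplexed execution might be modeled: the back-propagation algorithm is split into three sequentially executed stages, and the FPGAs pay a reconfiguration cost each time a stage changes. The timing constants, function name, and problem sizes below are hypothetical placeholders for illustration, not measurements from the paper; the point is only that the fixed reconfiguration overhead shrinks relative to compute time as the network grows.

```python
# Illustrative model (hypothetical numbers): one training epoch pays a fixed
# reconfiguration cost for each of the three stages (feed-forward,
# error back-propagation, weight update) plus compute time that scales
# with network size and the number of training patterns.

T_CONFIG = 0.10       # assumed time to reconfigure the FPGAs for one stage (s)
T_PER_NEURON = 1e-4   # assumed per-neuron compute time within one stage (s)
STAGES = 3            # feed-forward, back-propagation, weight update

def epoch_time(num_neurons: int, patterns: int) -> tuple[float, float]:
    """Return (total epoch time, reconfiguration overhead fraction)."""
    config = STAGES * T_CONFIG                               # paid once per pass through the stages
    compute = STAGES * patterns * num_neurons * T_PER_NEURON # grows with network size
    total = config + compute
    return total, config / total

for n in (16, 128, 1024, 8192):
    total, overhead = epoch_time(n, patterns=100)
    print(f"{n:5d} neurons: total {total:8.2f} s, reconfig overhead {overhead:6.2%}")
```

Under these assumed constants, the printed overhead fraction falls from tens of percent for a small network to well under one percent for thousands of neurons, which mirrors the abstract's claim that reconfiguration overhead becomes insignificant in large networks.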