The advent of fast reconfigurable FPGA circuits opens new paths for the design of neuroprocessors. A learning algorithm is divided into distinct steps, each associated with a specific FPGA configuration, so that training alternates between computation and reconfiguration stages. Since only the logic required by the current step need be present on the chip, this method makes highly efficient use of hardware resources. We apply it to the design of a neuroprocessor implementing multilayer perceptrons with on-chip training and pruning. All arithmetic operations are carried out with fixed-point numbers. The first step of our work is the simulation of limited-precision training and pruning algorithms; our experiments demonstrate that fixed-point representation is well suited to this task. The paper also presents the principles of our hardware implementation, with particular attention to the pruning mechanisms.
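To make the limited-precision setting concrete, the following is a minimal simulation sketch, assuming a signed 16-bit fixed-point format with 8 fractional bits and simple magnitude-based pruning; these bit widths, the network size, the learning rate, and the pruning threshold are illustrative assumptions, not the parameters used in the paper. A small multilayer perceptron is trained on XOR with every intermediate value rounded to the fixed-point grid, then pruned.

```python
import numpy as np

# Illustrative fixed-point format (an assumption, not the paper's choice):
# signed 16-bit words with 8 fractional bits.
FRAC_BITS = 8
TOTAL_BITS = 16
SCALE = 1 << FRAC_BITS
LIMIT = (1 << (TOTAL_BITS - 1)) - 1

def quantize(x):
    """Round to the nearest representable fixed-point value, with saturation."""
    q = np.clip(np.round(x * SCALE), -LIMIT, LIMIT)
    return q / SCALE

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Tiny 2-2-1 multilayer perceptron trained on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = quantize(rng.uniform(-0.5, 0.5, (2, 2)))
W2 = quantize(rng.uniform(-0.5, 0.5, (2, 1)))
lr = 0.5

for epoch in range(5000):
    # Forward pass, quantizing every intermediate result as a
    # fixed-point datapath would.
    h = quantize(sigmoid(quantize(X @ W1)))
    y = quantize(sigmoid(quantize(h @ W2)))

    # Standard backpropagation with quantized deltas and weight updates.
    e = T - y
    d2 = quantize(e * y * (1 - y))
    d1 = quantize((d2 @ W2.T) * h * (1 - h))
    W2 = quantize(W2 + lr * (h.T @ d2))
    W1 = quantize(W1 + lr * (X.T @ d1))

# Magnitude-based pruning: zero out weights below a threshold
# (one simple stand-in for the pruning schemes the paper studies).
THRESHOLD = 0.1
W1[np.abs(W1) < THRESHOLD] = 0.0
W2[np.abs(W2) < THRESHOLD] = 0.0

# Forward pass after pruning, on the quantized network.
h = quantize(sigmoid(quantize(X @ W1)))
y = quantize(sigmoid(quantize(h @ W2)))
print("outputs after training and pruning:", y.ravel())
```

Quantizing after every operation, rather than only at the stored weights, is what such a simulation is meant to capture: it approximates what a fixed-point datapath actually computes, which is why limited-precision software experiments can be indicative of hardware behavior.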