Backpropagation Analysis of the Limited Precision on High-Order Function Neural Networks

Quantization analysis of limited precision is widely used in the hardware realization of neural networks. Because most of the neural computation occurs in the training phase, the effects of quantization are most significant there. We analyze backpropagation training and recall under limited precision on high-order function neural networks (HOFNN), point out the potential problems, and characterize the performance sensitivity to lower-bit quantization. We compare training performance with and without weight clipping and derive the effects of quantization error on backpropagation for both on-chip and off-chip training. Experimental simulation results verify the presented theoretical analysis.

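To make the comparison concrete, the sketch below (not the authors' code; the function names, bit width, weight range, and toy loss are illustrative assumptions) shows one limited-precision backpropagation update in which the stored weight is rounded to a fixed-point grid, with and without clipping to a bounded weight range, which is the kind of policy difference the paper analyzes.

```python
# Minimal sketch, assuming a simple fixed-point model: round weights to a
# b-bit grid over [-w_max, w_max], optionally saturating (clipping) first.
import numpy as np

def quantize(w, bits, w_max, clip=True):
    """Round w to a bits-wide fixed-point grid covering [-w_max, w_max]."""
    step = 2.0 * w_max / (2 ** bits)      # quantization step size
    if clip:
        w = np.clip(w, -w_max, w_max)     # saturate before rounding
    return np.round(w / step) * step      # nearest-grid-point rounding

def sgd_step(w, grad, lr, bits, w_max, clip=True):
    """One on-chip training step: the updated weight is stored at limited precision."""
    return quantize(w - lr * grad, bits, w_max, clip=clip)

# Toy usage: drive a single weight toward a target value and compare the
# low-bit trajectories with and without clipping of the stored weight.
rng = np.random.default_rng(0)
w_clip, w_noclip, target, lr = 0.0, 0.0, 0.7, 0.1
for _ in range(200):
    x = rng.normal()
    g_clip   = (w_clip * x - target * x) * x     # gradient of 0.5*(w*x - target*x)^2
    g_noclip = (w_noclip * x - target * x) * x
    w_clip   = sgd_step(w_clip,   g_clip,   lr, bits=8, w_max=1.0, clip=True)
    w_noclip = sgd_step(w_noclip, g_noclip, lr, bits=8, w_max=1.0, clip=False)
print(w_clip, w_noclip)
```

Lowering `bits` in this toy setup makes the quantization step larger than the typical weight update, which is the regime where the paper's sensitivity analysis predicts training degradation.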