Analysis of Quantization Effects on Higher Order Function and Multilayer Feedforward Neural Networks

In this chapter we investigate the combined effects of quantization and clipping on higher order function neural networks (HOFNN) and multilayer feedforward neural networks (MLFNN). Statistical models are used to analyze the effects of quantization in a digital implementation. We analyze the performance degradation as a function of the number of fixed-point and floating-point quantization bits under the assumption of different probability distributions for the quantized variables, compare training performance with and without weight clipping, and derive in detail the effect of the quantization error on forward and backward propagation. Regardless of the distribution of the initial weights, the weight distribution approaches a normal distribution during training with floating-point or high-precision fixed-point quantization; only when the number of quantization bits is very small do the weights cluster toward ±1 during training with fixed-point quantization. Based on statistical models, we establish and analyze, for a true nonlinear neuron and for both on-chip and off-chip training, the relationships among input and output bit resolution, training and quantization methods, the number of network layers, the network order, and performance degradation. Our simulation results verify the presented theoretical analysis.

DOI: 10.4018/978-1-61520-711-4.ch008
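As a rough illustration of the fixed-point case discussed above, the following Python sketch quantizes a weight vector to a given number of bits with clipping to ±1 and measures the resulting quantization error. The function and parameter names are illustrative assumptions, not taken from the chapter, and the sketch does not reproduce the chapter's statistical model or training procedure.

```python
import numpy as np

def quantize_fixed_point(w, n_bits, clip=1.0):
    """Uniformly quantize weights to n_bits fixed-point values in [-clip, clip].

    A generic illustration of fixed-point weight quantization with clipping;
    names and defaults here are assumptions for the example only.
    """
    step = clip / (2 ** (n_bits - 1))          # quantization step size
    w_clipped = np.clip(w, -clip, clip)        # clip weights to +/- clip
    return np.round(w_clipped / step) * step   # round to nearest level

# Example: RMS quantization error for normally distributed weights
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, size=10_000)
for bits in (4, 8, 16):
    err = w - quantize_fixed_point(w, bits)
    print(f"{bits:2d} bits: RMS quantization error = {err.std():.2e}")
```

As expected from the statistical analysis, the measured error shrinks by roughly a factor of two per additional bit, while very low bit counts push many quantized weights onto the clipping boundaries at ±1.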
