Neural Networks with Limited Precision Weights and Their Application in Embedded Systems

This paper presents a type of optimized neural network with limited precision weights (LPWNN). Such networks require less memory for storing the weights and less expensive floating-point units to perform the computations involved, making them better suited to embedded-system implementation than networks with real-valued weights. Based on an analysis of the learning capability of LPWNNs, a Quantize Back-propagation Step-by-Step (QBPSS) algorithm is proposed for such networks to overcome the effects of limited precision. Methods for designing and training LPWNNs are presented, including the quantization of the non-linear activation function and the selection of learning rate, network architecture, and weight precision. The performance of the optimized LPWNN is evaluated against a conventional neural network with double-precision floating-point weights on a digit-recognition task on an ARM embedded system; the results show that the optimized LPWNN runs 11 times faster than the conventional one.
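To make the two quantization steps the abstract mentions concrete, the sketch below shows fixed-point weight quantization and a table-driven sigmoid activation, the basic ingredients an integer-only LPWNN inference kernel on an ARM core would rely on. This is a minimal sketch of the general technique, not the paper's QBPSS algorithm; the names and parameter choices (`quantize_weight`, `FRAC_BITS`, `sigmoid_lut`, the 16-bit Q8 format, the [-8, 8) sampling range) are illustrative assumptions, not values taken from the paper.

```c
/*
 * Sketch only (not the paper's QBPSS implementation): quantize weights
 * to a signed Q8 fixed-point format and replace the sigmoid with a
 * precomputed lookup table, so inference needs no floating-point unit.
 * All names and parameters here are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define FRAC_BITS 8    /* 8 fractional bits -> resolution 1/256 */
#define LUT_SIZE  256  /* sigmoid sampled on [-8, 8)            */

/* Round a real-valued weight to the nearest representable Q8 value. */
static int16_t quantize_weight(double w)
{
    double scaled = w * (1 << FRAC_BITS);
    if (scaled > INT16_MAX) scaled = INT16_MAX;  /* saturate */
    if (scaled < INT16_MIN) scaled = INT16_MIN;
    return (int16_t)lround(scaled);
}

static int16_t sigmoid_lut[LUT_SIZE];

/* Precompute the quantized sigmoid once, offline or at startup. */
static void init_sigmoid_lut(void)
{
    for (int i = 0; i < LUT_SIZE; i++) {
        double x = -8.0 + 16.0 * i / LUT_SIZE;
        sigmoid_lut[i] = quantize_weight(1.0 / (1.0 + exp(-x)));
    }
}

/* Quantized activation: map a Q8 pre-activation to a LUT index. */
static int16_t sigmoid_q(int32_t acc)
{
    int32_t idx = (acc + (8 << FRAC_BITS)) * LUT_SIZE / (16 << FRAC_BITS);
    if (idx < 0)         idx = 0;
    if (idx >= LUT_SIZE) idx = LUT_SIZE - 1;
    return sigmoid_lut[idx];
}

int main(void)
{
    init_sigmoid_lut();
    int16_t w = quantize_weight(0.7312);  /* -> 187/256, ~0.7305 */
    int16_t x = quantize_weight(-1.25);
    /* One neuron step: multiply-accumulate in 32 bits, rescale to Q8. */
    int32_t acc = (int32_t)w * x;  /* Q8 * Q8 -> Q16 */
    acc >>= FRAC_BITS;             /* back to Q8     */
    printf("activation = %f\n", sigmoid_q(acc) / 256.0);
    return 0;
}
```

Once weights are frozen in this format, the inner loop is pure 16/32-bit integer arithmetic, which is where the speedup over double-precision weights on FPU-less embedded cores comes from.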
