A New Learning Algorithm for Neural Networks with Integer Weights and Quantized Non-linear Activation Functions

The hardware implementation of neural networks is a fascinating area of research with far-reaching applications. However, real-valued weights and continuous non-linear activation functions are not well suited to hardware implementation. This paper presents a new learning algorithm that trains neural networks with integer weights and excludes derivatives from the training process. The performance of this procedure was evaluated by comparing it with the multi-threshold method and the continuous-discrete learning method on the XOR problem and on function approximation problems; simulation results show that the new learning method substantially outperforms the other two in both convergence and generalization.
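Although the paper's exact update rule is not reproduced here, the general idea of derivative-free training over integer weights with a quantized activation can be sketched in Python. The sketch below is an illustration only: it assumes a 2-2-1 hard-threshold network, a constrained integer weight range of [-3, 3], and a simple perturb-one-weight-by-±1 local search as a stand-in for the paper's unspecified procedure, demonstrated on XOR.

# A minimal sketch of derivative-free training with integer weights and a
# quantized (hard-threshold) activation, illustrated on XOR. The update rule
# here is a simple random local search over integer weight perturbations,
# used as a stand-in for the paper's own (unspecified) procedure.
import random

def threshold(x):
    """Quantized non-linear activation: hard threshold to {0, 1}."""
    return 1 if x >= 0 else 0

def forward(w, x):
    # 2-2-1 network; w is a flat list of 9 integers:
    # hidden unit 1: w[0], w[1], bias w[2]
    # hidden unit 2: w[3], w[4], bias w[5]
    # output unit:   w[6], w[7], bias w[8]
    h1 = threshold(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = threshold(w[3] * x[0] + w[4] * x[1] + w[5])
    return threshold(w[6] * h1 + w[7] * h2 + w[8])

def error(w, data):
    """Sum of squared errors over the training set (no derivatives used)."""
    return sum((forward(w, x) - t) ** 2 for x, t in data)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
WMIN, WMAX = -3, 3  # assumed constrained integer weight range

random.seed(0)
w = [random.randint(WMIN, WMAX) for _ in range(9)]
best = error(w, XOR)
for step in range(10000):
    if best == 0:
        break
    # Perturb one randomly chosen weight by +/-1 (clipped to the range)
    # and keep the change only if the training error does not increase.
    i = random.randrange(len(w))
    trial = list(w)
    trial[i] = max(WMIN, min(WMAX, trial[i] + random.choice((-1, 1))))
    e = error(trial, XOR)
    if e <= best:
        w, best = trial, e

print("weights:", w, "error:", best)

Because the search is stochastic, a run that stalls at a non-zero error can simply be restarted with a fresh random initialization; the key point the sketch conveys is that both the forward pass and the weight updates stay entirely in integer arithmetic, with no gradient computation anywhere.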
