Integer-weight approximation of continuous-weight multilayer feedforward nets
Multilayer feedforward neural nets with integer weights can be used to approximate the response of their continuous-weight counterparts. Integer weights restricted to a maximum magnitude of 3 require only 3 binary bits of storage per weight and are therefore very attractive for hardware implementations of neural nets. However, integer-weight nets have a weaker learning capability and lack the affine-group invariance of continuous-weight nets. Although these weaknesses can be compensated for by adding hidden neurons, they can also be exploited to match the network complexity more closely to that of the learning task. This paper discusses these issues with the help of the decision and error surfaces of 2D classification problems of varying complexity; the results suggest that in many cases limited weight resolution can be offset by an increase in the size of the hidden layer.
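To illustrate the idea of approximating a trained continuous-weight layer with integer weights of magnitude at most 3, the sketch below uses a simple nearest-integer quantization with a per-matrix scale. This is an assumption for illustration only; the paper's actual training or discretization procedure may differ.

```python
import numpy as np

def quantize_weights(W, max_mag=3):
    """Map a continuous weight matrix to integer weights in [-max_mag, max_mag].

    A per-matrix scale is chosen so the largest-magnitude weight maps to
    max_mag, then every weight is rounded to the nearest integer. The scale
    is returned so the original dynamic range can be (approximately)
    recovered at inference time.
    """
    scale = np.max(np.abs(W)) / max_mag
    if scale == 0:
        return np.zeros_like(W, dtype=int), 1.0
    W_int = np.clip(np.round(W / scale), -max_mag, max_mag).astype(int)
    return W_int, scale

# Hypothetical example: one hidden layer of a continuous-weight net
rng = np.random.default_rng(0)
W_continuous = rng.normal(scale=0.8, size=(2, 5))   # 2 inputs -> 5 hidden units
W_int, s = quantize_weights(W_continuous)

x = np.array([0.4, -1.2])
h_continuous = np.tanh(x @ W_continuous)
h_integer = np.tanh(x @ (W_int * s))                 # integer weights, shared scale
print(np.round(h_continuous - h_integer, 3))         # per-unit approximation error
```

With weights confined to the seven integer values in [-3, 3], each weight fits in 3 bits; the residual error seen above is the kind of approximation gap that, per the abstract, can often be reduced by enlarging the hidden layer.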