Integer-weight approximation of continuous-weight multilayer feedforward nets

Multilayer feedforward neural nets with integer weights can approximate the responses of their counterparts with continuous weights. Integer weights restricted to a maximum magnitude of 3 require only 3 binary bits of storage, which makes them very attractive for hardware implementations of neural nets. However, integer-weight nets have a weaker learning capability and lack the affine group invariance of continuous-weight nets. Although these weaknesses can be compensated for by adding hidden neurons, they can also be turned to advantage for closely matching the network complexity to that of the learning task. This paper discusses these issues with the help of the decision and error surfaces of 2D classification problems of various complexities; the results suggest that in many cases limited weight resolution can be offset by enlarging the hidden layer of the network.
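As a minimal illustration of the storage claim above, the following sketch (an assumption for exposition, not the paper's training method) rounds continuous weights to integers clipped to a maximum magnitude of 3, so that each weight fits in 3 bits in sign-magnitude form:

```python
import numpy as np

def quantize_weights(w, max_mag=3):
    """Round continuous weights to integers in [-max_mag, max_mag].

    With max_mag = 3 the 7 possible values {-3, ..., 3} fit in
    3 binary bits (1 sign bit + 2 magnitude bits).
    This is a naive round-and-clip sketch, not the learning
    procedure studied in the paper.
    """
    return np.clip(np.rint(w), -max_mag, max_mag).astype(int)

# Hypothetical continuous weights from a trained net:
w = np.array([0.4, -2.7, 3.9, -0.1, 1.5])
print(quantize_weights(w))
```

A scheme like this discards resolution, which is exactly the loss the paper proposes to offset by adding hidden neurons.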