Distributed normalisation input coding to speed up the training process of a BP neural network classifier

A coding method, distributed normalisation, is presented to speed up the training process of a back-propagation neural network classifier. In contrast to one-node normalisation coding, the value of each feature variable is distributed over a number of input nodes, increasing the representation range devoted to selected parts of that variable. A distinct advantage of this coding method is that it retains the generalisation capability of one-node normalisation coding.
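The abstract does not state the exact coding formula, so the following is only a minimal sketch of one plausible interpretation, assuming a thermometer-style split of the feature range into sub-intervals, each mapped to its own input node; the function names, interval boundaries, and the Python/NumPy setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def one_node_normalisation(x, lo, hi):
    """Conventional one-node coding: the feature is scaled to [0, 1] on a single input node."""
    return np.array([(x - lo) / (hi - lo)])

def distributed_normalisation(x, boundaries):
    """Sketch of a distributed coding: one feature spread over several input nodes.

    `boundaries` splits the feature range [b0, bN] into N sub-intervals (an assumed
    scheme). Node i encodes x linearly while x lies inside its sub-interval and
    saturates at 0 or 1 outside it, so a narrow sub-interval devotes a full node's
    [0, 1] range to a small part of the feature.
    """
    codes = []
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        codes.append(np.clip((x - lo) / (hi - lo), 0.0, 1.0))
    return np.array(codes)

if __name__ == "__main__":
    # Hypothetical feature spanning [0, 100]; the sub-range [40, 60] is assumed
    # to be the part whose representation we want to refine with its own node.
    x = 47.0
    print(one_node_normalisation(x, 0.0, 100.0))                  # [0.47] -- one node
    print(distributed_normalisation(x, [0.0, 40.0, 60.0, 100.0]))  # [1.0, 0.35, 0.0] -- three nodes
```

Under this reading, the classifier simply sees three input nodes instead of one for that feature; the extra nodes refine the coding of the chosen sub-range without altering the normalised [0, 1] scale of the inputs.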