Data-scaling problems in neural-network training

Abstract This paper discusses data-scaling problems in feedforward neural-network training. These problems arise when the experimental data to be learned vary over a wide interval, so that part of the information in the data is lost once the data are scaled. To solve these problems, a parametric output function for the neurons is proposed: two new parameters enlarge the region over which the data can be scaled. During backpropagation learning, the relative square error is minimized. In this way the loss of information is avoided, since the modified neural network can be trained to account equally for the largest and the smallest values in the training data set. Two examples of neural-network models of biotechnological processes are presented, and a comparison with classical feedforward neural-network models is made. Different approaches to training with the new parameters are discussed.
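
The abstract does not give the exact form of the parametric output function, but one natural way to realize the idea is to rescale the sigmoid with two parameters that widen its output range, and to backpropagate the relative square error through them. The sketch below, in Python with NumPy, shows such a construction; the names alpha and beta, the functional form, and the toy values are illustrative assumptions, not the paper's definitions. A finite-difference check confirms the gradients used for the two new parameters.

```python
import numpy as np

def parametric_sigmoid(z, alpha, beta):
    """Sigmoid rescaled to the interval (alpha, alpha + beta).

    alpha and beta stand in for the paper's two new parameters;
    the exact functional form is an assumption for illustration.
    """
    return alpha + beta / (1.0 + np.exp(-z))

def relative_square_error(y_pred, y_true):
    """Each residual is divided by its target, so the smallest and the
    largest target values contribute comparably to the loss."""
    return np.mean(((y_pred - y_true) / y_true) ** 2)

def grads(z, alpha, beta, y_true):
    """Backpropagation through the output neuron: gradients of the
    relative square error w.r.t. alpha, beta, and the pre-activation z."""
    s = 1.0 / (1.0 + np.exp(-z))
    y_pred = alpha + beta * s
    r = 2.0 * (y_pred - y_true) / (y_true ** 2) / y_true.size  # dE/dy_pred
    return r.sum(), (r * s).sum(), r * beta * s * (1.0 - s)

# Finite-difference check on toy targets spanning three orders of magnitude.
rng = np.random.default_rng(0)
z = rng.normal(size=5)
y = np.array([0.05, 0.2, 1.0, 10.0, 50.0])
alpha, beta, h = -1.0, 60.0, 1e-6

g_a, g_b, _ = grads(z, alpha, beta, y)
num_a = (relative_square_error(parametric_sigmoid(z, alpha + h, beta), y)
         - relative_square_error(parametric_sigmoid(z, alpha - h, beta), y)) / (2 * h)
num_b = (relative_square_error(parametric_sigmoid(z, alpha, beta + h), y)
         - relative_square_error(parametric_sigmoid(z, alpha, beta - h), y)) / (2 * h)
print(f"dE/dalpha: analytic {g_a:.6g}  numeric {num_a:.6g}")
print(f"dE/dbeta : analytic {g_b:.6g}  numeric {num_b:.6g}")
```

In a setup like this, alpha and beta can either be updated jointly with the ordinary weights during backpropagation or initialized from the range of the training targets and held fixed; which of these corresponds to the approaches compared in the paper is not stated in the abstract.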