A real-time implementation of a multi-layer perceptron with automatic tuning of learning parameters

Abstract This paper addresses one of the main drawbacks of the traditional multi-layer perceptron learning strategy: the slow rate of convergence during the learning phase. To mitigate this problem, the learning parameters of the Back-Propagation algorithm, the learning rate and the momentum, regarded as the characteristic parameters of a filter, are tuned during network training according to two introduced functions. Examples demonstrating the suitability of the proposed strategy are reported. The introduced modification of the Back-Propagation algorithm speeds up multi-layer neural network learning.
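The abstract's idea of adapting the learning rate and momentum during Back-Propagation can be sketched as follows. This is a minimal illustration, not the paper's method: the two tuning functions introduced in the paper are not reproduced here, so a common "bold driver" style heuristic (grow the parameters while the training error falls, shrink them when it rises) stands in for them, applied to a small one-hidden-layer perceptron trained on XOR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the XOR problem, a classic MLP benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One-hidden-layer perceptron with sigmoid units.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta, alpha = 0.5, 0.5            # learning rate and momentum, tuned below
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
prev_err = np.inf
errors = []

for epoch in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    err = 0.5 * np.sum((Y - T) ** 2)
    errors.append(err)

    # Back-propagate the error (sigmoid derivative is y * (1 - y)).
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    gW2, gb2 = H.T @ dY, dY.sum(0)
    gW1, gb1 = X.T @ dH, dH.sum(0)

    # Heuristic tuning of eta and alpha (an assumption standing in for the
    # paper's two tuning functions): accelerate while the error keeps
    # decreasing, back off when it increases.
    if err < prev_err:
        eta = min(eta * 1.05, 2.0)
        alpha = min(alpha * 1.02, 0.95)
    else:
        eta *= 0.7
        alpha *= 0.7
    prev_err = err

    # Momentum-based weight update.
    vW2 = alpha * vW2 - eta * gW2; W2 += vW2
    vb2 = alpha * vb2 - eta * gb2; b2 += vb2
    vW1 = alpha * vW1 - eta * gW1; W1 += vW1
    vb1 = alpha * vb1 - eta * gb1; b1 += vb1

print(f"error: {errors[0]:.4f} (initial) -> {errors[-1]:.4f} (final)")
```

The heuristic mirrors the abstract's premise that fixed learning parameters slow convergence: letting the error trend drive eta and alpha typically reduces the number of epochs needed compared with keeping both constant.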