Mathematical proofs of an improvement in neural learning are presented. Within an analytical and statistical framework, the dependence of neural learning on the distribution of the training-set vectors is established for a function approximation problem. It is shown that the back-propagation (BP) algorithm performs well for a particular type of training-vector distribution and that, when this behaviour is exploited, the degree of saturation in the hidden layer can be reduced. Accordingly, pre-processing that reshapes the distribution of the input vectors to match the distribution the BP algorithm favours is proposed for estimating the parameters of an articulatory speech synthesizer. The same idea of incorporating the distribution of the speech signal into the process has been used in speech coding techniques such as PCM to improve performance.
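The abstract does not specify the pre-processing transform, so the following Python sketch is only an illustration of the general idea: it applies mu-law companding (the distribution-matching transform used in PCM speech coding) to Laplacian-distributed inputs before plain BP training of a one-hidden-layer sigmoid network, and reports the fraction of saturated hidden units. The network size, learning rate, mu value, target function and saturation threshold are all assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def mu_law(x, mu=255.0):
    """Compand x in [-1, 1]; spreads out the dense region near zero."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saturation(h, eps=0.05):
    """Fraction of hidden activations within eps of 0 or 1 (saturated)."""
    return np.mean((h < eps) | (h > 1.0 - eps))

# Laplacian inputs mimic the sharply peaked amplitude distribution of speech.
x = np.clip(rng.laplace(scale=0.1, size=(1000, 1)), -1.0, 1.0)
y = np.sin(3.0 * x)  # illustrative target for the function approximation task

def train_bp(x, y, hidden=10, lr=0.5, epochs=200):
    """Plain batch BP on a 1-hidden-layer sigmoid net with linear output."""
    W1 = rng.normal(0, 1, (x.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(x @ W1 + b1)          # hidden-layer activations
        out = h @ W2 + b2                 # linear output for regression
        err = out - y
        # Standard BP gradients for the mean-squared-error loss.
        dW2 = h.T @ err / len(x);  db2 = err.mean(0)
        dh = (err @ W2.T) * h * (1.0 - h)
        dW1 = x.T @ dh / len(x);   db1 = dh.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return saturation(sigmoid(x @ W1 + b1)), np.mean(err ** 2)

for name, xi in [("raw inputs", x), ("mu-law companded", mu_law(x))]:
    sat, mse = train_bp(xi, y)
    print(f"{name:18s} hidden saturation = {sat:.3f}, mse = {mse:.4f}")

Under these assumptions, companding the inputs tends to leave fewer hidden units stuck near the flat extremes of the sigmoid, which is the saturation effect the abstract describes; the actual pre-processing and proofs are in the paper itself.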