Exploiting the statistical characteristics of speech signals for improved neural learning in an MLP neural network

Mathematical proofs for an improvement in neural learning are presented. Within an analytical and statistical framework, the dependence of neural learning on the distribution characteristics of the training-set vectors is established for a function approximation problem. It is shown that the BP algorithm performs well for a particular type of training-set vector distribution and that, when this behaviour is exploited, the degree of saturation in the hidden layer can be reduced. To exploit this behaviour of the BP algorithm, a pre-processing step that modifies the distribution characteristics of the input vectors is proposed for estimating the parameters of an articulatory speech synthesizer. The same concept of incorporating the distribution characteristics of the speech signal into the process has been used for performance improvement in speech coding techniques such as PCM.
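A minimal sketch of the kind of pre-processing described above, assuming a mu-law-style companding transform analogous to the one used in PCM speech coding; the function name `mu_law_compand`, the parameter `mu`, and the synthetic Laplacian data are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def mu_law_compand(x, mu=255.0):
    """Reshape inputs in [-1, 1] toward a more uniform distribution in [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

# Synthetic "speech-like" input vectors: amplitudes concentrated near zero,
# as is typical of speech samples (modelled here with a Laplacian).
rng = np.random.default_rng(0)
raw = np.clip(rng.laplace(scale=0.05, size=(1000, 16)), -1.0, 1.0)

# Pre-processing step: modify the distribution characteristics of the input
# vectors before they are presented to the MLP trained with backpropagation,
# so that hidden-unit activations are kept away from the saturated regions.
preprocessed = mu_law_compand(raw)

# The companded vectors spread out over the input range, illustrating the
# change in distribution characteristic the pre-processing is meant to achieve.
print("raw std:         ", raw.std())
print("preprocessed std:", preprocessed.std())
```

In this sketch the companded vectors, rather than the raw samples, would be used as the training set for the MLP; whether a mu-law curve or some other distribution-shaping transform is most appropriate depends on the statistics of the actual speech data.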