Speech synthesis with artificial neural networks

The use of artificial neural networks in speech synthesis is addressed. In speech synthesis, written symbols (graphemes) are first converted into other symbols (phonemes). Neural networks are especially competitive for tasks that require complex nonlinear transformations, as well as for tasks where domain-specific knowledge is insufficient. The conversion of the phonetic transcription of a text into a set of speech parameters appears to be such a task. Significant results of the authors' approach, in which a neural network is trained to perform this conversion, are presented.
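
To make the phoneme-to-parameter mapping concrete, the sketch below trains a small feedforward network to map a window of one-hot phoneme codes to a vector of speech parameters. This is only an illustrative sketch: the phoneme inventory size, context window, number of parameters, network size, and synthetic training data are all assumptions, not the authors' network or data.

```python
# Illustrative sketch (not the authors' system): a small feedforward network
# mapping a window of one-hot phoneme codes to a vector of speech parameters.
import numpy as np

rng = np.random.default_rng(0)

N_PHONEMES = 40   # assumed phoneme inventory size
WINDOW = 3        # assumed phoneme context window (previous, current, next)
N_PARAMS = 12     # assumed number of speech parameters per frame
HIDDEN = 32       # assumed hidden layer size

# Synthetic stand-in data: random phoneme windows and target parameter vectors.
X_ids = rng.integers(0, N_PHONEMES, size=(500, WINDOW))
X = np.zeros((500, WINDOW * N_PHONEMES))
for i, row in enumerate(X_ids):
    for j, p in enumerate(row):
        X[i, j * N_PHONEMES + p] = 1.0
Y = rng.normal(size=(500, N_PARAMS))

# One hidden layer with tanh units and a linear output,
# trained by plain gradient descent on the mean-squared error.
W1 = rng.normal(scale=0.1, size=(X.shape[1], HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_PARAMS))
b2 = np.zeros(N_PARAMS)

lr = 0.01
for epoch in range(200):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    P = H @ W2 + b2            # predicted speech parameters
    err = P - Y
    loss = np.mean(err ** 2)

    # Backpropagation of the mean-squared error.
    dP = 2 * err / len(X)
    dW2 = H.T @ dP
    db2 = dP.sum(axis=0)
    dH = (dP @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training MSE:", loss)
```

In a real system, the predicted parameters (for example, spectral coefficients and pitch) would drive a parametric synthesizer to produce the speech waveform; the synthetic targets above stand in for such measured parameters.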