Learning pronunciation with the Visual Ear

We recently reported the use of Kohonen's feature map as the hidden layer of an RBF network for the recognition of spoken letters [1] and for the analysis of sleep EEG [2]. The feature map was shown to act as an aid to visualization during the initial period of unsupervised learning in the hidden layer. In this paper, we again exploit the topology-preserving properties of Kohonen's feature map, this time for the visual interpretation of speech. It is shown that speech sounds, such as words or phonemes, may be displayed as moving trajectories on a computer screen and enhanced for ease of interpretation. A system known as the Visual Ear is introduced, in which speech from a normal speaker is displayed alongside that of a pupil learning pronunciation, enabling a visual comparison between the two. Applications of the Visual Ear to the accelerated learning of foreign languages and to general speech therapy are then discussed, and the limitations of the present system are highlighted.
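As an illustration of the trajectory display described above, the following Python sketch maps each frame of an utterance onto the best-matching unit of a trained Kohonen map, so that successive frames trace a path across the grid. This is a minimal sketch of the general technique, not the system reported here: the map size, the feature dimensionality, and the randomly generated map and frames are all illustrative assumptions standing in for a trained map and real speech features.

```python
import numpy as np

def bmu(som, frame):
    """Best-matching unit: grid coordinates of the map node whose
    weight vector is closest (Euclidean) to the input frame."""
    # som: (rows, cols, dim) weight array; frame: (dim,) feature vector
    dists = np.linalg.norm(som - frame, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

def trajectory(som, frames):
    """Map a sequence of speech feature frames to a path on the map,
    i.e. the moving trajectory to be drawn on screen."""
    return [bmu(som, f) for f in frames]

# Illustrative use only: a random 10x10 map over 12-dimensional
# features stands in for a map trained on real speech data.
rng = np.random.default_rng(0)
som = rng.normal(size=(10, 10, 12))   # hypothetical trained map
frames = rng.normal(size=(50, 12))    # hypothetical utterance frames
path = trajectory(som, frames)
print(path[:5])                       # first few grid positions
```

Because the map is topology-preserving, acoustically similar frames fall on nearby grid nodes, so two utterances of the same word should trace visually similar paths, which is what makes a side-by-side teacher/pupil comparison plausible.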