Neural network models of reading multi-syllabic words

This paper presents the results of simulations of a new class of artificial neural network models of reading. Unlike previous models, they are not restricted to mono-syllabic words, require no complicated input-output representations such as Wickelfeatures, and, although based on the NETtalk system of Sejnowski and Rosenberg (1987), require no pre-processing to align the letters and phonemes in the training data. The best cases achieve 100% performance on the Seidenberg and McClelland (1989) training corpus and in excess of 90% on pronounceable nonwords, and, when damaged, exhibit symptoms similar to those of acquired surface dyslexia.