
Despite the literacy problems associated with traditional English orthography (T.O.), many linguists have sought to justify it as a highly optimal system for English word families. These linguists advocate curricula based on this morphographemic concept. In order to quantify the morphographemic optimality of T.O., i.e., the degree to which word families retain the base spelling, a simple algorithm was applied to the derived and inflected forms of 100 bases. A relative optimality percentage was determined for each form, each family, and the corpus as a whole. At the same time, T.O., which was determined to be 95 percent optimal, was compared with a more phonemically reliable orthography, which was found to have a higher (97 percent) basic optimality. Finally, for purposes of determining the gradated difficulty of subject matter, the word families were ranked according to their optimality.

Introduction

It is...noteworthy but not too surprising that English orthography, despite its often-cited inconsistencies, comes remarkably close to being an optimal orthographic system for English. (Chomsky & Halle, 1968, p.49)

Problem

How close is remarkably close? What would an optimal orthographic system for English look like? In order to answer these questions, especially as they relate to the teaching of English, consider what this influential aside from The Sound Pattern of English assumes. The authors presuppose at least a perceived problem with traditional English orthography (T.O.); otherwise, Chomsky and Halle would not have found its optimality noteworthy. If T.O. were obviously optimal, it would not sometimes be called a serious "obstacle to literacy acquisition" (Carney, 1995, p.xvi). Studies of the difficulties it poses for writer and reader abound. According to Carney:

Such a view has been [often] stated. Ever since English spelling settled down in the seventeenth and eighteenth centuries, the consensus seems to have been that the conventions we have inherited are ill-suited...yet well-educated natives seem to cope with [T.O.], though only after a heavy investment of time and effort. (p.xviii)

Such anecdotes of variable success raise the question: just what is orthographic optimality? Chomsky and Halle state that an ideal orthography has one representation for each lexical entry (p.49). Others suggest that an optimal orthography uses one grapheme (i.e., letter) to signify one phoneme (i.e., a sound that distinguishes one word from another). The difference between these criteria reflects, to some extent, an emphasis on reading on the one hand and on writing on the other. In short, definitions of optimal orthography differ, to say nothing of how well T.O. measures up against them.

Background literature

A benchmark for the optimal spelling of English is available in Eastern Europe, where we find an active orthographic continuum. The Russian spelling system, for example, cannot be read "by a purely sequential, phonic method: it requires a combination of the phonic and look-and-say methods" (Knowles, 1988). This is the morphemic end of the spectrum: it retains the integrity of morphemes (i.e., minimal meaningful linguistic units such as bases and affixes) at the expense of one-to-one, sound-to-spelling and spelling-to-sound correspondences. The other end of the spectrum, characterized by near-100 percent phonemic integrity, is represented by the Serbo-Croatian orthography. In Serbo-Croatian, phonemes reign supreme: there is no such concept as the integrity of the morpheme (Knowles).
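To make the notion of morphemic integrity, and the relative optimality percentage described at the outset, concrete, a minimal Python sketch follows. It assumes one deliberately crude criterion, namely that a derived or inflected form is morphographemically optimal if the base spelling survives in it as a contiguous substring; the scoring procedure used in the present study need not be this simple, and the word families shown are illustrative only.

    # Hypothetical sketch: score how well derived and inflected forms retain a base spelling.
    # Assumption (not necessarily this study's algorithm): a form counts as "optimal"
    # if the base spelling appears intact as a contiguous substring of the form.

    def form_optimal(base: str, form: str) -> bool:
        """True if the base spelling survives intact in the derived form."""
        return base.lower() in form.lower()

    def family_optimality(base: str, forms: list[str]) -> float:
        """Percentage of forms in a word family that retain the base spelling."""
        if not forms:
            return 100.0
        hits = sum(form_optimal(base, f) for f in forms)
        return 100.0 * hits / len(forms)

    # Toy word families for illustration only (not the study's 100-base corpus):
    families = {
        "labor": ["labored", "laboring", "laborious"],
        "curious": ["curiously", "curiosity"],  # curiosity alters the base spelling
        "nation": ["national", "nationality"],
    }

    for base, forms in families.items():
        print(base, family_optimality(base, forms))

    # A corpus-level figure could then be the mean over all families, or a mean
    # weighted by the number of forms per family; the choice affects the result.

Under this toy criterion, labor and nation score 100 percent while curious scores 50 percent, since curiosity breaks the base spelling; the figures reported in this study were of course derived from the full set of 100 bases rather than from a toy list of this kind.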
Between the Serbo-Croatian and Russian orthographies lies Byelorussian. Rather than maintaining morphemic integrity, this system partly overrides morphemes by spelling according to pronunciation. For instance, <o> is pronounced /o/ until a stress shift renders a pronunciation of /a/; the spelling then also shifts to <a>. Byelorussian has, however, adopted this principle only for vowels, not consonants. Knowles reports claims that this alphabetic system has helped improve literacy in Byelorussia. He concludes:

In the Slavonic languages a spectrum of spelling systems exists, from the predominantly morphophonemic (Russian) to the predominantly phonemic (Serbo-Croat); there is no representative of the English 'antisystem'!

The optimality of this so-called English 'antisystem' can be systematically analyzed using the theoretical assumptions underlying any point along this orthographic spectrum. Perhaps the best-known systematic analysis of any kind was performed by Hanna, Hanna, Hodges and Rudorf (1966). In order to determine how closely T.O. approximates the alphabetic principle, these Stanford University linguists incorporated a linguistically based research design into a computer program, thru which they fed 17,000 different words. Their work, published as Phoneme-Grapheme Correspondences as Cues to Spelling Improvement, began with the sound of the words as represented by phonetic respellings. Then, by devising rules, they attempted to spell those words correctly. To summarize, they found that 90 percent of the phoneme-grapheme correspondences the program produced were correct; however, fewer than 50 percent of the words analyzed could be spelled correctly on the basis of phonological principles. Nevertheless, Carney states that, while the 50 percent figure suffers from both under- and overstatement, "this 50 percent success rate of correctly spelt words is probably too generous for the rules as they stand" (p.94). Despite 308 rules and 88 exception (i.e., set-aside) words, this analysis suggests that T.O. is 50 percent optimal on a phonemic, sound-to-letter basis. Hanna et al. admit that, when other phonological factors are not taken into consideration, T.O.'s phoneme-grapheme relationships only inconclusively approximate the alphabetic principle (p.39).

More recent research, with an eye toward speech synthesis, has emphasized the spelling-to-sound optimality of T.O. Ainsworth's algorithm (1973) stands out among those devised to account for English spelling with basic correspondence rules. Just as success for Hanna et al. is correct spelling, success for Ainsworth's algorithm is the intelligibility of the synthesized speech output (Carney, p.260). Ainsworth has no set-aside table of irregular words and uses 159 correspondence rules, although a quarter of these rules have to do with single words or morphemes. While Carney cautions that such an algorithm cannot be quoted as an unqualified index of the optimality of T.O., Ainsworth's results are suggestive:

Listeners judged the comprehensibility of the synthetic speech output. The best results came from the more experienced listeners who were used to... synthetic speech. The best of these identified 90 percent of synthesized words correctly; the poorest listeners could only manage 50 percent. (Carney, p.266)

In other words, Ainsworth made 50 to 90 percent of the words in a text identifiable using an algorithm of 159 correspondence rules.
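By way of illustration, the Python sketch below shows the general shape of such rule-driven conversion: an ordered table of letter-to-sound correspondences applied greedily from left to right. The handful of rules and the phoneme labels are invented for the example and should not be mistaken for Ainsworth's actual rule set, which is far larger and context-sensitive.

    # Hypothetical sketch of rule-based letter-to-sound conversion in the spirit of
    # Ainsworth (1973). The rules and phoneme labels below are invented for
    # illustration; the real algorithm uses 159 ordered, context-sensitive rules.

    RULES = [  # ordered so that longer, more specific graphemes are tried first
        ("tion", "SH AH N"),
        ("igh", "AY"),
        ("ee", "IY"),
        ("sh", "SH"),
        ("a", "AE"),
        ("e", "EH"),
        ("i", "IH"),
        ("o", "AA"),
        ("u", "AH"),
        ("l", "L"),
        ("n", "N"),
        ("s", "S"),
        ("t", "T"),
    ]

    def letters_to_sounds(word: str) -> list[str]:
        """Greedy left-to-right matching: at each position apply the first rule
        whose grapheme matches; letters with no matching rule are skipped."""
        word = word.lower()
        phones: list[str] = []
        i = 0
        while i < len(word):
            for grapheme, phoneme in RULES:
                if word.startswith(grapheme, i):
                    phones.extend(phoneme.split())
                    i += len(grapheme)
                    break
            else:
                i += 1  # no rule fired for this letter
        return phones

    print(letters_to_sounds("light"))   # ['L', 'AY', 'T']
    print(letters_to_sounds("nation"))  # ['N', 'AE', 'SH', 'AH', 'N'] (vowel is wrong)

Even this toy table shows both the reach and the limits of basic correspondences: light is rendered correctly, while nation receives the wrong vowel because no simple rule captures the <a> of the -ation pattern, a point taken up below.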
Therefore, in terms of one-to-one, spelling-to-sound correspondences, Ainsworth's results suggest an optimality of approximately 70 percent, with a practical margin of error of plus or minus 20 percent. If one averages the phonemic optimality results of Hanna et al. (50 percent) and Ainsworth (70 percent), then 60 percent serves as a rough approximation of T.O.'s optimality in terms of basic one-to-one correspondences.

Both analyses are based on surface, or self-evident, phonemic principles. Beneath the surface, however, are morphophonemic patterns, which researchers have explored since the 1960s. Venezky (1967), who defines T.O. as a phonemically based system that maintains morphemic identity whenever possible, provides word pairs as evidence of these patterns: labor/laborious, rigor/rigorous, and curious/curiosity, altho curiosity fails to maintain the morphemic identity of its base form (curious). McDonald (1970) suggests "it is more valuable to have an orthography which protects the obvious visual similarity in word families than one which obliterates such relationships in favor of broad phonetic accuracy" (p.325). "Making efficient reading easier" is the target of widely cited morphophonemic pedagogist C. Chomsky (1970, p.292), who advocates the close correspondence of T.O. to underlying abstract forms rather than to their phonetic realizations. While she may be faulted for not seeking to make all forms of reading easier, her word-pair samples such as nation/national and courage/courageous appear to make efficient reading easier by "permitting immediate direct identification of the lexical item, without requiring the reader to abstract away from irrelevant phonetic information" (p.289).

Yet other orthographers counter that, tho these morphophonemic theories are valid on their face, readers' lack of cognitive awareness of these patterns may make the issue moot. Indeed, Chomsky expresses concern when she asks: "Does [this abstract lexical representation] have a psychological reality for language users, [i.e.,] is it based on something a reader can honestly be said to know?" (p.295). Her own reply, "it seems to me [that it does]", is hardly persuasive, betraying a lack of available hard evidence in 1970. Among the first to note specific flaws in morphographemic theory were Simon and Simon (1973), who argue that there are too few word pairs of this type to be useful and that "such analogies will often lead to misspellings (e.g., remember-rememberance; proceed-proceedure)" (cited in Marsh, Friedman, Welch & Desberg, 1980, p.353). Frith (1980) points out that, tho learners do use such analogies and rules when spelling novel words, linguistic rules are complex and of a large and unknown number, [and often] known by hindsight only. For instance, one could theoretically know how to spell nation (rather than nashen) because of the morphological relationship to native; on the other hand, one probably only knows of the relationship because one can spell