Spelling issues tend to create relatively minor (though still complex) problems for corpus linguistics, information retrieval and natural language processing tasks that use ‘standard’ or modern varieties of English. For example, in corpus annotation, we have to decide how to deal with tokenisation issues such as whether (i) periods represent sentence boundaries or acronyms and (ii) apostrophes represent quote marks or contractions (Grefenstette and Tapanainen, 1994; Grefenstette, 1999); these two ambiguities are illustrated in the brief sketch following the list below. The issue of spelling variation becomes more problematic when utilising corpus linguistic techniques on non-standard varieties of English, not least because variation can be due to differences in spelling habits, transcription or compositing practices, and morpho-syntactic customs, as well as “misspelling”. Examples of non-standard varieties include:
• Scottish English1 (Anderson et al., forthcoming), and dialects such as Tyneside English2 (Allen et al., forthcoming)
• Early Modern English (Archer and Rayson, 2004; Culpeper and Kytö, 2005)
• Emerging varieties such as SMS or CMC in weblogs (Ooi et al., 2006)
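To make the two tokenisation ambiguities mentioned above concrete, the following minimal Python sketch shows a naive sentence splitter that must consult an abbreviation list before treating a period as a sentence boundary, together with a rough heuristic for separating contraction apostrophes from quote marks. The abbreviation list, regular expression and labels are invented for illustration and are not taken from any of the tools cited above.

```python
# Illustrative sketch only: the two tokenisation ambiguities discussed above.
import re

ABBREVIATIONS = {"dr.", "mr.", "e.g.", "i.e."}  # assumed, minimal list

def naive_sentence_split(text):
    """Split on full stops unless the preceding token looks like an abbreviation."""
    sentences, current = [], []
    for tok in text.split():
        current.append(tok)
        if tok.endswith(".") and tok.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

def classify_apostrophe(token):
    """Very rough guess: contraction if the apostrophe precedes a known clitic,
    otherwise treat it as a quote mark or something else."""
    if re.fullmatch(r"\w+'(s|t|ll|re|ve|d|m)", token, re.IGNORECASE):
        return "contraction"
    return "quote-or-other"

print(naive_sentence_split("Dr. Smith arrived. He left early."))
# ['Dr. Smith arrived.', 'He left early.']
print(classify_apostrophe("he'll"), classify_apostrophe("'quoted'"))
# contraction quote-or-other
```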
In the Dagstuhl workshop we focussed on historical corpora. Vast quantities of searchable historical material are being created in electronic form through large digitisation initiatives already underway, e.g. the Open Content Alliance3, Google Book Search4, and Early English Books Online5. Annotation, typically at the part-of-speech (POS) level, is carried out on modern corpora for linguistic analysis, information retrieval and natural language processing tasks such as named entity extraction. Increasingly, researchers wish to carry out similar tasks on historical data (Nissim et al., 2004). However, historical data is considered noisy for tasks such as these. The problems faced when applying corpus annotation tools trained on modern language data to historical texts are the motivation for the research described in this paper.
Previous research has adopted the approach of adding historical variants to the POS tagger lexicon, for example in the TreeTagger annotation of GerManC (Durrell et al., 2006), or of “back-dating” the lexicon in the Constraint Grammar Parser of English (ENGCG) when annotating the Helsinki corpus (Kytö and Voutilainen, 1995).
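The following minimal sketch, written in Python rather than in the lexicon formats actually used by TreeTagger or ENGCG, illustrates what “back-dating” a tagger lexicon amounts to: each historical spelling is added as an extra entry that inherits the possible tags of its modern equivalent. The words, variants and tag sets shown are invented for illustration.

```python
# A minimal sketch of lexicon back-dating (illustrative entries only).
modern_lexicon = {
    "above": {"IN", "RB"},
    "trivial": {"JJ"},
    "whilst": {"IN"},
}

# variant -> modern headword pairs; illustrative only
historical_variants = {
    "aboue": "above",
    "triviall": "trivial",
    "whil'st": "whilst",
}

backdated_lexicon = dict(modern_lexicon)
for variant, modern in historical_variants.items():
    # the variant inherits the modern word's possible POS tags
    backdated_lexicon[variant] = set(modern_lexicon[modern])

print(backdated_lexicon["aboue"])   # e.g. {'IN', 'RB'}
```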
Our aim was to develop an historical semantic tagger in order to facilitate studies on historical data similar to those we had previously performed on modern data using the USAS semantic analysis system (Rayson et al., 2004). The USAS tool relies on POS tagging as a prerequisite to carrying out semantic disambiguation; hence we were faced with the task of retraining or back-dating two tools, a POS tagger and a semantic tagger. Our proposed solution incorporates a corpus pre-processor that detects historical spelling variants and inserts modern equivalents alongside them. This supports retrieval as well as annotation tasks and, to some extent, avoids the need to retrain each annotation tool that is applied to the corpus. The modern tools can then be applied to the modern spelling equivalents rather than the historical variants, thereby achieving higher levels of accuracy.
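A minimal sketch of this pre-processing idea is given below, assuming a variant detector that returns a modern equivalent (or nothing) for each token. The inline markup loosely follows the TEI <reg> convention but is purely illustrative and is not VARD's actual output format; the hard-coded mapping stands in for the real detector.

```python
# Illustrative pre-processing sketch: modern equivalents are inserted
# alongside historical variants so downstream (modern-trained) tools can
# annotate the modern form while the original spelling is preserved.
def detect_modern_equivalent(token):
    # stand-in for the real variant detector; tiny hard-coded mapping
    known = {"aboue": "above", "triviall": "trivial"}
    return known.get(token.lower())

def preprocess(tokens):
    output = []
    for tok in tokens:
        modern = detect_modern_equivalent(tok)
        if modern is not None:
            output.append(f'<reg orig="{tok}">{modern}</reg>')
        else:
            output.append(tok)
    return " ".join(output)

print(preprocess(["the", "triviall", "matter", "aboue"]))
# the <reg orig="triviall">trivial</reg> matter <reg orig="aboue">above</reg>
```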
The resulting variant detector tool (VARD) employs a number of techniques derived from spell-checking tools, as we wished to evaluate their applicability to historical data. The current version of the tool uses known-variant lists, SoundEx, edit distance and letter replacement heuristics to match Early Modern English variants with modern forms. The techniques are combined using a scoring mechanism that enables preferred candidates to be selected using likelihood values; a simplified sketch of this scoring approach follows the list below. The current known-variant lists and letter replacement rules are manually created. In a cross-language study with English and German texts we found that similar techniques could be used to derive letter replacement heuristics from corpus examples (Pilz et al., forthcoming). Our experiments show that VARD can successfully deal with:
• Apostrophes signalling missing letter(s) or sound(s): ’fore (“before”), hee’l (“he will”)
• Irregular apostrophe usage: again’st (“against”), whil’st (“whilst”)
• Contracted forms: ’tis (“it is”), thats (“that is”), youle (“you will”), t’anticipate (“to anticipate”)
• Hyphenated forms: acquain-tance (“acquaintance”)
• Variation due to the different use of graphs ⟨u⟩, ⟨v⟩, ⟨i⟩, ⟨y⟩: aboue (“above”), abyde (“abide”)
• Doubling of vowels and consonants, e.g. ⟨ll⟩ for ⟨l⟩: triviall (“trivial”)
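The sketch below illustrates the candidate-scoring idea referred to above: evidence from a known-variant list, letter replacement heuristics and a string-similarity measure is accumulated into a single score per candidate modern form, and the highest-scoring form is preferred. For brevity, SoundEx is replaced here by a generic similarity ratio, and the weights, rules and word lists are invented for illustration; this is not VARD's actual implementation.

```python
# A minimal sketch of combining several spell-checker-style evidence sources
# into one score per candidate (weights and lists are illustrative only).
from collections import defaultdict
from difflib import SequenceMatcher

KNOWN_VARIANTS = {"aboue": "above", "hee'l": "he will"}   # manually created list
LETTER_RULES = [("u", "v"), ("y", "i"), ("ll", "l")]      # replacement heuristics
MODERN_WORDS = {"above", "abide", "about", "trivial"}     # stand-in modern lexicon

def rule_candidates(variant):
    """Apply each letter-replacement heuristic and keep results that are
    real modern words."""
    found = set()
    for old, new in LETTER_RULES:
        candidate = variant.replace(old, new)
        if candidate in MODERN_WORDS:
            found.add(candidate)
    return found

def score_candidates(variant):
    """Accumulate weighted evidence for each plausible modern form;
    the highest-scoring candidate is preferred."""
    scores = defaultdict(float)
    if variant in KNOWN_VARIANTS:                  # evidence 1: known-variant list
        scores[KNOWN_VARIANTS[variant]] += 1.0
    for candidate in rule_candidates(variant):     # evidence 2: letter replacement
        scores[candidate] += 0.8
    for candidate in MODERN_WORDS:                 # evidence 3: string similarity
        similarity = SequenceMatcher(None, variant, candidate).ratio()
        if similarity >= 0.8:                      # crude stand-in for edit distance
            scores[candidate] += 0.5 * similarity
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(score_candidates("aboue"))   # 'above' should be ranked first
```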
By direct comparison with a modern lexicon, variants that are not in the lexicon are easy to identify; however, our studies show that a significant portion of variants cannot be discovered this way. Inconsistencies in the use of the genitive, and ‘then’ appearing instead of ‘than’ (or vice versa), require contextual information for their detection. We will outline our approach to resolving this problem, which uses context-sensitive template rules containing lexical, grammatical and semantic information.
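As a rough illustration of such template rules, the sketch below assumes tokens annotated with a word form, a POS tag and a semantic field, and encodes a single rule that flags ‘then’ as a likely variant of ‘than’ when it directly follows a comparative form. The rule format, tag labels and example sentence are hypothetical and do not reproduce the actual rule set.

```python
# Illustrative context-sensitive rule over lexically, grammatically and
# semantically annotated tokens (tags shown are assumed, not prescribed).
from dataclasses import dataclass

@dataclass
class Token:
    word: str
    pos: str   # e.g. a CLAWS-style POS tag
    sem: str   # e.g. a USAS-style semantic field

def then_vs_than_rule(tokens, i):
    """Flag 'then' as a likely variant of 'than' when it directly follows a
    comparative adjective or adverb (POS tags assumed to be JJR/RBR)."""
    if tokens[i].word.lower() == "then" and i > 0 and tokens[i - 1].pos in {"JJR", "RBR"}:
        return "than"   # suggested intended/modern form
    return None

sentence = [Token("better", "JJR", "A5.1+"), Token("then", "RT", "Z5"),
            Token("euer", "RR", "T1")]
print(then_vs_than_rule(sentence, 1))   # -> "than"
```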
Footnotes
1 http://www.scottishcorpus.ac.uk/
2 http://www.ncl.ac.uk/necte/
3 http://www.opencontentalliance.org/
4 http://books.google.com/
5 http://eebo.chadwyck.com/home
References
[1] Dawn Archer et al. The Identification of Spelling Variants in English and German Historical Texts: Manual or Automatic? Literary and Linguistic Computing, 2008.
[2] Dawn Archer et al. Using an historical semantic tagger as a diagnostic tool for variation in spelling. 2004.
[3] Malvina Nissim et al. Recognising Geographical Entities in Scottish Historical Documents. 2003.
[4] Anthony McEnery et al. The UCREL Semantic Analysis System. 2004.
[5] Merja Kytö. Applying the Constraint Grammar Parser of English to the Helsinki Corpus. 1997.
[6] Pasi Tapanainen et al. What is a word, What is a sentence? Problems of Tokenization. 1994.