Simple Maths for Keywords

We present a simple method for identifying keywords of one corpus vs. another. There is no one-size-fits-all list, but different lists according to the frequency range the user is interested in. The method includes a variable which allows the user to focus on higher- or lower-frequency words.

“This word is twice as common here as there.” Such observations are entirely central to corpus linguistics. We very often want to know which words are distinctive of one corpus, or text type, versus another.

The simplest way to make the comparison is expressed in my opening sentence. “Twice as common” means the word’s frequency (per thousand words, or million words) in the one corpus is twice its frequency in the other. We count occurrences in each corpus, divide each count by the number of words in that corpus, optionally multiply by 1,000 or 1,000,000 to give frequencies per thousand or per million words, and divide the one figure by the other to give a ratio. (Since the thousands or millions cancel out when we do the division, it makes no difference whether we use thousands or millions. In what follows I will assume millions and will use wpm for “words per million”, on the model of the “parts per million” of other sciences.)

It is often instructive to find the ratio for all words, and to sort words by the ratio to find the words that are most associated with each corpus as against the other. This will give a first pass at two “keyword” lists: one, taken from the top of the sorted list, of corpus1 vs. corpus2; the other, taken from the bottom of the list (with scores below 1 and approaching 0), of corpus2 vs. corpus1. (In what follows I will refer to the two corpora as the focus corpus fc, for which we want to find keywords, and the reference corpus rc: we divide relative frequency in the focus corpus by relative frequency in the reference corpus and are interested in the high-scoring words.)
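As a concrete illustration of this procedure, a minimal Python sketch follows. It is mine, not part of the method’s specification; the function name wpm and the word counts are invented for the example, and real input would be full frequency lists for fc and rc.

    # Sketch of the ratio-based keyword list described above.
    # Word counts are invented for illustration.

    def wpm(counts):
        """Convert raw counts to frequencies per million words."""
        total = sum(counts.values())
        return {w: c * 1_000_000 / total for w, c in counts.items()}

    fc_counts = {"loch": 30, "whisky": 12, "the": 60_000}
    rc_counts = {"loch": 2, "whisky": 10, "the": 61_000}
    fc, rc = wpm(fc_counts), wpm(rc_counts)

    # Ratio of relative frequencies. Words absent from rc are skipped
    # here: what to do about them is problem 3 below.
    ratios = {w: fc[w] / rc[w] for w in fc if rc.get(w, 0) > 0}

    # Top of the sorted list: keywords of fc vs rc; the bottom (scores
    # below 1, approaching 0) gives the rc vs fc list.
    for w, r in sorted(ratios.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{w}\t{r:.2f}")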
There are four problems with keyword lists prepared in this way.

1. All corpora are different, usually in a multitude of ways. We probably want to examine a keyword list because of one particular dimension of difference between fc and rc – perhaps a difference of genre, or of region, or of domain. The list may well be dominated by other differences, in which we are not at all interested. Keyword lists tend to work best where the corpora are very well matched in all regards except the one in question. This is a question of how fc and rc have been prepared. It is often the greatest source of bewilderment when users see keyword lists (and makes keyword lists good tools for identifying the characteristics of corpora). However, it is an issue of corpus construction, which is not the topic of this paper, so it is not discussed further.

2. “Burstiness”. If a word is the topic of one text in the corpus, it may well be used many times in that text, so that its frequency in the corpus comes mainly from that single text. Such “bursty” words do a poor job of representing the overall contrast between fc and rc. This, again, is not the topic of this paper. A range of solutions have been proposed, as reviewed by Gries (2007). The method we use in our experiments is “average reduced frequency” (ARF; Savický and Hlaváčová 2002), which discounts the frequency of words with bursty distributions: for a word with an even distribution across a corpus, ARF will be equal to raw frequency, but for a word with a very bursty distribution, occurring only in a single short text, ARF will be a little over 1.

3. You can’t divide by zero. It is not clear what to do about words which are present in fc but absent from rc.

4. Even setting aside the zero cases, the list will be dominated by words with very low frequencies in the reference corpus: there is nothing very surprising about a contrast between 10 wpm in fc and 1 wpm in rc, giving a ratio of 10, and we expect to find many such cases, but we would be very surprised to find words with 10,000 wpm in fc and only 1,000 wpm in rc, even though that also gives a ratio of 10. Simple ratios will give a list of rarer words.

The last problem has been the launching point for an extensive literature. The literature is shared with that on collocation statistics since, formally, the problems are similar: in both cases we compare the frequency of the keyword in condition 1 (which is either “in fc” or “with collocate x”) with its frequency in condition 2 (“in rc” or “not with collocate x”). The literature starts with Church and Hanks (1989); other much-cited references include Dunning (1993) and Pedersen (1996). Proposed statistics include Mutual Information (MI), Log Likelihood and Fisher’s Exact Test; see Chapter X of Manning and Schütze (1999). I have argued elsewhere (Kilgarriff 2005) that the mathematical sophistication of MI, Log Likelihood and Fisher’s Exact Test is of no value to us, since all it serves to do is to disprove the null hypothesis that language is random, which is patently untrue. Sophisticated maths needs a null hypothesis to build on, and we have no null hypothesis: perhaps we can meet our needs with simple maths.

A common solution to the zeros problem is “add one”. If we add one to all the frequencies, including those for words which are present in fc but absent from rc, then we have no zeros and can compute a ratio for all words. A word with 10 wpm in fc and none in rc gets a ratio of 11:1 (as we add 1 to 10, and 1 to 0), or simply 11. “Add one” is widely used as a solution to a range of problems associated with low and zero frequency counts, in language technology and elsewhere (Manning and Schütze 1999). “Add one” (to all counts) is the simplest variant: there are sometimes reasons for adding some other constant, or a variable amount, to all frequencies.

This suggests a solution to problem 4. Consider what happens when we add 1, 100, or 1,000 to all counts-per-million from both corpora. The results, for the three words obscurish, middling and common, in two hypothetical corpora, are presented below.

Add 1:

    word        wpm in fc   wpm in rc   adjusted, for fc   adjusted, for rc   ratio   rank
    obscurish   10          0           10+1=11            0+1=1              11.0    1
    middling    200         100         200+1=201          100+1=101         1.99    2
    common      1200
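The “add n” adjustment itself is simple enough to sketch in a few lines of Python. The sketch below is illustrative, not from the paper: the function name smoothed_ratio is invented, and it uses only the two example words whose figures appear in full above, showing how the ranking shifts as n grows.

    # Add n to both per-million frequencies before taking the ratio:
    # zero counts in rc no longer break the division, and a larger n
    # pushes rarer words further down the ranking.

    def smoothed_ratio(fc_wpm, rc_wpm, n):
        return (fc_wpm + n) / (rc_wpm + n)

    examples = {"obscurish": (10, 0), "middling": (200, 100)}

    for n in (1, 100, 1000):
        ranked = sorted(examples,
                        key=lambda w: smoothed_ratio(*examples[w], n),
                        reverse=True)
        scores = ", ".join(f"{w}: {smoothed_ratio(*examples[w], n):.2f}"
                           for w in ranked)
        print(f"add {n}: {scores}")

    # add 1:    obscurish: 11.00, middling: 1.99
    # add 100:  middling: 1.50, obscurish: 1.10
    # add 1000: middling: 1.09, obscurish: 1.01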