Simple, proven approaches to text retrieval

This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are applicable to many different types of text material, are viable for very large files, and require no special skills or training for searching, so they are easy for end users to apply. The methods described here have a sound theoretical basis and are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods work very well with full texts, not only titles and abstracts, and with large files containing three quarters of a million documents. These tests, the TREC tests (see Harman 1993–1997; IPM), have been far more extensive than any done before. The approach presented here is based on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching. The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts. The user's request can be a word list, phrases, sentences or extended text.

1 Terms and matching

Index terms are normally content words (but see section 6). In request processing, stop words (e.g. prepositions and conjunctions) are eliminated via a stop word list; for economy, they are usually also removed in inverted file construction. Terms are also generally stems (or roots) rather than full words, so that matches are not missed through trivial word variation, as with singular/plural forms.
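The inverted file organisation and term processing described above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the stop-word list is a token sample, and the suffix stripper is a crude stand-in for a proper stemming algorithm such as Porter's.

```python
from collections import defaultdict

# Token stop-word list -- an illustrative fragment, not a complete list.
STOP_WORDS = {"a", "an", "and", "in", "of", "on", "the", "to", "with"}

def stem(word):
    # Strip a few common suffixes so that e.g. singular/plural forms match.
    # A crude stand-in for a real stemming algorithm and suffix list.
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def index_terms(text):
    # Content-word extraction: lower-case, drop stop words, reduce to stems.
    return [stem(w) for w in text.lower().split() if w not in STOP_WORDS]

def build_inverted_file(docs):
    # Term list with linked document identifiers plus per-document counts.
    inverted = defaultdict(dict)
    for doc_id, text in docs.items():
        for term in index_terms(text):
            inverted[term][doc_id] = inverted[term].get(doc_id, 0) + 1
    return inverted
```

For example, `build_inverted_file({1: "term weights", 2: "weights and weights"})` yields the postings `{"term": {1: 1}, "weight": {1: 1, 2: 2}}`: each term links the documents containing it to its within-document count.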
Stemming can be achieved most simply by the user truncating their request words to match any inverted-index words that include them; but it is a better strategy to truncate using a standard stemming algorithm and suffix list (see Porter 1980), which is more convenient for the user and reduces the size of the inverted term list. The request is taken as an unstructured list of terms. If the terms are unweighted, output can be ranked by the number of matching terms – i.e. for a request with 5 terms, first documents matching all 5, then documents matching any 4, and so on. However, performance may be improved considerably by giving a weight to each term (or each term-document combination). In this case, output is ranked by the sum of weights (see below).
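Ranking by sum of weights can be sketched as below. The inverted file and the weighting scheme are illustrative assumptions: the toy postings are invented, and the idf-style weight (log of collection size over the number of documents containing the term) is just one simple statistical choice, not the specific scheme developed in this note.

```python
import math

# Toy inverted file: term -> {document id: within-document count}.
# Terms and documents are invented for illustration.
INVERTED = {
    "weight": {1: 2, 3: 1},
    "term":   {1: 1, 2: 1, 3: 1},
    "match":  {2: 1},
}
N_DOCS = 3  # size of the collection

def rank(query_terms, inverted, n_docs):
    # Score each document by the sum of the weights of its matching terms.
    scores = {}
    for term in query_terms:
        postings = inverted.get(term, {})
        if not postings:
            continue
        # idf-style weight: terms occurring in fewer documents count for more.
        weight = math.log(n_docs / len(postings))
        for doc_id in postings:
            scores[doc_id] = scores.get(doc_id, 0.0) + weight
    # Ranked search output, highest total weight first.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Here `rank(["weight", "term", "match"], INVERTED, N_DOCS)` places document 2 first: "match" occurs in only one document and so carries the largest weight, outweighing the more common terms matched by the other documents.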