GEOGRAPHIC IR SYSTEMS: REQUIREMENTS AND EVALUATION

Geographic information retrieval is a new and evolving domain. The development of GIR systems requires the analysis of requirements for such systems and, once systems have been implemented, their evaluation. This paper describes requirements analysis and evaluation for the SPIRIT system. Methods of user-centred and system-centred evaluation are described, and a methodology for building a document collection that facilitates the derivation of measures of system performance is introduced, together with a new scheme for assessing the spatial and thematic relevance of search results. The paper stresses the importance of developing approaches to evaluating GIR systems that assess user interactions holistically as well as measuring system performance.

Introduction

Geographic information retrieval (GIR) is a fast-developing area “concerned with providing access to geo-referenced information sources” (Larson, 1996). In recent years, information retrieval has become, to a large extent, synonymous with the retrieval of relevant documents from large collections of unstructured, text-based documents stored on the web. In this context, we define geographic information retrieval more narrowly than Larson: as the retrieval of geographically and thematically relevant documents in response to a query of the form <theme, spatial relationship, location> (e.g. Castles, Scotland), where the spatial relationship may either be left implicit, implying containment, or be selected explicitly from a set of possible topological, proximity and directional options (e.g. inside, near, north of), and where the documents searched are those available on the web. Developments in GIR are driven both by academic enquiry and by commercial interests, with, for example, Google having recently introduced a so-called “Local” search engine based on the integration of data provided by business directories and web documents (http://local.google.co.uk/).
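To make the query form concrete, the following Python sketch is purely illustrative: the class names, the particular SpatialRelation options and the combined_relevance weighting are assumptions made here, not part of the SPIRIT system. It represents a query as a <theme, spatial relationship, location> triple and shows one simple way graded thematic and spatial relevance scores could be combined into a single ranking score.

    from dataclasses import dataclass
    from enum import Enum


    class SpatialRelation(Enum):
        """Examples of the topological, proximity and directional options mentioned above."""
        INSIDE = "inside"       # topological containment (the implicit default)
        NEAR = "near"           # proximity
        NORTH_OF = "north of"   # directional


    @dataclass(frozen=True)
    class GIRQuery:
        """A query of the form <theme, spatial relationship, location>."""
        theme: str
        relation: SpatialRelation
        location: str


    def combined_relevance(thematic: float, spatial: float, alpha: float = 0.5) -> float:
        """Hypothetical linear combination of graded thematic and spatial relevance
        scores in [0, 1]; alpha weights the thematic component. This is an
        illustrative assumption, not the relevance scheme proposed in the paper."""
        return alpha * thematic + (1.0 - alpha) * spatial


    if __name__ == "__main__":
        # "Castles, Scotland" with the containment relationship left implicit.
        query = GIRQuery(theme="castles", relation=SpatialRelation.INSIDE, location="Scotland")
        print(query)
        print(combined_relevance(thematic=0.8, spatial=0.6))  # 0.70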

[1] Peter Willett et al. Readings in information retrieval, 1997.

[2] Cyril Cleverdon et al. The Cranfield tests on index language devices, 1997.

[3] Pia Borlund et al. The IIR evaluation model: a framework for evaluation of interactive information retrieval systems, 2003, Inf. Res.

[4] Avi Arampatzis et al. Multi-Dimensional Scattered Ranking Methods for Geographic Information Retrieval, 2005, GeoInformatica.

[5] Alexander Dekhtyar et al. Information Retrieval, 2018, Lecture Notes in Computer Science.

[6] Ray R. Larson et al. Geographic information retrieval and spatial browsing, 1996.

[7] Martin Raubal et al. An Affordance-Based Model of Place in GIS, 1999.

[8] Mark Sanderson et al. The CLEF 2004 Cross-Language Image Retrieval Track, 2004, CLEF.

[9] Alia I. Abdelmoty et al. The SPIRIT Spatial Search Engine: Architecture, Ontologies and Spatial Indexing, 2004, GIScience.

[10] Mark Sanderson et al. The SPIRIT collection: an overview of a large web collection, 2004, SIGIR Forum.

[11] Gabriella Kazai et al. The overlap problem in content-oriented XML retrieval evaluation, 2004, SIGIR '04.

[12] Tefko Saracevic et al. RELEVANCE: A review of and a framework for the thinking on the notion in information science, 1997, J. Am. Soc. Inf. Sci.

[13] Paul Clough et al. Identifying imprecise regions for geographic information retrieval using the web, 2005.

[14] Jaana Kekäläinen et al. Using graded relevance assessments in IR evaluation, 2002, J. Assoc. Inf. Sci. Technol.

[15] Austin Henderson et al. A development perspective on interface, design, and theory, 1991.

[16] Ellen M. Voorhees et al. Overview of TREC 2001, 2001, TREC.