Wanted: Element Retrieval Users

Document-centric information retrieval is used every day by people all over the world. It is a well-studied, well-understood application for which a sound user model exists. Element retrieval, by contrast, is a new field of research with no identified applications, no users, and no user model. Some of the methodological issues in element retrieval are identified. The standard document collection (the INEX / IEEE collection) is shown to be unsuitable for element retrieval, raising the question: does a suitable collection exist? Some characteristics of querying behavior are identified, raising a further question: will users ever use structural hints in their queries? An examination of the judgments and metrics shows that the judgments are inconsistent and that the metrics do not measure the same things. It is suggested that identifying an application of element retrieval could resolve some of these issues. Aspects of that application could (and should) be modeled, resulting in a sounder field of element retrieval. Alternatively, whatever element retrieval is, users don’t want it, judges can’t judge it, and the metrics can’t measure it.
