The Relevant in Context retrieval task is document or article retrieval with a twist: not only must the relevant articles be retrieved, but the relevant information within each article (captured by a set of XML elements) must also be correctly identified. Our main research question is: how do we evaluate the Relevant in Context task? We propose a generalized average precision measure that meets two main requirements: i) the score reflects the ranked list of articles inherent in the result list, and at the same time ii) the score reflects how well the retrieved information per article (i.e., the set of elements) corresponds to the relevant information. The resulting measure was used at INEX 2006.
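To make the two requirements concrete, the following is a minimal Python sketch of a generalized average precision over a ranked list of articles. It assumes a per-article score in [0, 1] that quantifies how well the retrieved elements of an article cover its relevant information; the function name, arguments, and the exact per-article scoring are illustrative assumptions, not the paper's definitions.

```python
def generalized_average_precision(ranking, article_score, relevant):
    """Sketch of generalized average precision for a ranked article list.

    ranking       -- article ids in retrieval order
    article_score -- dict: article id -> score in [0, 1] reflecting how well
                     the retrieved elements cover that article's relevant text
                     (the per-article scoring is an assumption of this sketch)
    relevant      -- set of article ids that contain any relevant information
    """
    if not relevant:
        return 0.0
    cumulative = 0.0  # running sum of per-article scores down the ranking
    total = 0.0       # sum of generalized precision values at relevant ranks
    for rank, doc in enumerate(ranking, start=1):
        cumulative += article_score.get(doc, 0.0)
        if doc in relevant:
            # generalized precision at this rank: mean per-article score so far
            total += cumulative / rank
    # normalize by the total number of relevant articles, retrieved or not
    return total / len(relevant)


# Example: article a1 is ranked but contributes nothing, a9 is never retrieved
ranking = ["a3", "a1", "a7"]
scores = {"a3": 0.8, "a1": 0.0, "a7": 0.5}
relevant = {"a3", "a7", "a9"}
print(generalized_average_precision(ranking, scores, relevant))
```

In this sketch, the ranked-list requirement is met because scores are accumulated in rank order, and the per-article requirement is met because each article contributes only its element-level coverage score rather than a binary relevance judgment.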