Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track
The TREC 2013 Contextual Suggestion Track allowed participants to submit personalised rankings built from documents drawn either from the Open Web or from ClueWeb12, an archived, static Web collection. We argue that this setting poses problems for comparing participants' performance fairly. We analyse the biases, both objective and subjective, that arise in the process, and discuss these issues within the general framework of evaluating personalised Information Retrieval on dynamic versus static datasets.