Evaluation with informational and navigational intents

Given an ambiguous or underspecified query, search result diversification aims at accommodating different user intents within a single "entry-point" result page. However, some intents are informational, for which many relevant pages may help the user, while others are navigational, for which exactly one web page is required. We propose new evaluation metrics for search result diversification that consider this distinction, as well as a simple method for quantitatively comparing the intuitiveness of a given pair of metrics. Our main experimental findings are: (a) in terms of discriminative power, which reflects statistical reliability, the proposed metrics, DIN#-nDCG and P+Q#, are comparable to intent recall and D#-nDCG, and possibly superior to α-nDCG; (b) in terms of preference agreement with intent recall, P+Q# is superior to the other diversity metrics and may therefore be the most intuitive metric among those that emphasise diversity; and (c) in terms of preference agreement with effective precision, DIN#-nDCG is superior to the other diversity metrics and may therefore be the most intuitive metric among those that emphasise relevance. Moreover, DIN#-nDCG may be the most intuitive metric among those that consider both diversity and relevance. In addition, we demonstrate that the randomised Tukey's Honestly Significant Differences (HSD) test, which takes the entire set of available runs into account, is substantially more conservative than the paired bootstrap test, which considers only one run pair at a time; we therefore recommend the former for significance testing whenever a set of runs is available for evaluation.
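
For concreteness, a minimal sketch of the intuitiveness comparison is given below. It assumes that the "#" variants average a base metric with intent recall (e.g. D#-nDCG = λ·I-rec + (1−λ)·D-nDCG, with λ typically set to 0.5), and it is our illustration rather than the paper's code: the score-matrix layout and the function name are assumptions. Given per-topic scores for two competing diversity metrics and a gold-standard metric (e.g. intent recall or effective precision), it counts, over all topics and run pairs on which the two metrics disagree, how often each one agrees with the gold standard.

    import itertools

    def concordance(m1, m2, gold):
        # m1, m2, gold: per-topic score matrices of shape (n_topics, n_runs)
        # for two competing diversity metrics and a gold-standard metric.
        # (Hypothetical layout; any per-topic scores would do.)
        n_topics, n_runs = len(m1), len(m1[0])
        agree1 = agree2 = disagreements = 0
        for t in range(n_topics):
            for i, j in itertools.combinations(range(n_runs), 2):
                d1 = m1[t][i] - m1[t][j]
                d2 = m2[t][i] - m2[t][j]
                dg = gold[t][i] - gold[t][j]
                if d1 * d2 < 0 and dg != 0:  # metrics disagree; gold has a preference
                    disagreements += 1
                    agree1 += d1 * dg > 0
                    agree2 += d2 * dg > 0
        if disagreements == 0:
            return None  # the two metrics never disagree
        return agree1 / disagreements, agree2 / disagreements

The metric with the higher agreement fraction is taken to be the more intuitive of the two with respect to the chosen gold standard.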

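The significance-testing comparison can be sketched in the same spirit. The code below is a generic implementation of the randomised Tukey HSD procedure, not the paper's own toolkit; the array layout, seed, and trial count are assumptions. In each trial, run labels are shuffled independently within each topic, and every observed pairwise difference in mean scores is compared against the null distribution of the largest difference among all run pairs; using this single, set-wide null distribution is what makes the test more conservative than a paired bootstrap test applied to one run pair at a time.

    import numpy as np

    def randomised_tukey_hsd(scores, n_trials=10000, seed=0):
        # scores: (n_topics, n_runs) array of per-topic metric scores for every run.
        scores = np.asarray(scores, dtype=float)
        means = scores.mean(axis=0)
        observed = np.abs(means[:, None] - means[None, :])  # all pairwise differences
        rng = np.random.default_rng(seed)
        exceed = np.zeros_like(observed)
        for _ in range(n_trials):
            # Under the null hypothesis all runs are exchangeable,
            # so run labels are permuted independently within each topic.
            permuted = np.stack([rng.permutation(row) for row in scores])
            pm = permuted.mean(axis=0)
            # Compare every observed difference with the LARGEST difference
            # among all run pairs in this trial.
            exceed += (pm.max() - pm.min()) >= observed
        return exceed / n_trials  # p-value for every run pair

Run pairs whose entry in the returned matrix falls below the chosen significance level (say 0.05) are declared significantly different; because the null distribution is built from the entire set of runs at once, the test becomes stricter as more runs are included, in line with the recommendation above.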