Effects of Rank and Precision of Search Results on Users' Evaluations of System Performance
Chirag Shah | Diane Kelly | Xin Fu