Overview of the CLEF Dynamic Search Evaluation Lab 2018