PROMISE retreat report: prospects and opportunities for information access evaluation
Martin Braschler | Paul Buitelaar | Maarten de Rijke | Giorgio Maria Di Nunzio | Giuseppe Santucci | Elaine Toms | Nicola Ferro | Allan Hanbury | Khalid Choukri | Henning Müller | Preben Hansen | Gianmaria Silvello | Vivien Petras | Birger Larsen | Toine Bogers | Maristella Agosti | Anni Järvelin | Pamela Forner | Richard Berendsen | Mihai Lupu | Florina Piroi | Karin Friberg | Simone Peruzzo | Ivano Masiero
[1] Djoerd Hiemstra, et al. Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics, 2012, Lecture Notes in Computer Science.
[2] Nicola Ferro, et al. 6 – Towards an infrastructure for digital library performance evaluation, 2009.
[3] Rob W.W. Hooft, et al. The value of data, 2011, Nature Genetics.
[4] Allan Hanbury, et al. Bringing the Algorithms to the Data: Cloud-Based Benchmarking for Medical Image Analysis, 2012, CLEF.
[5] Ian J. Taylor, et al. Workflows and e-Science: An overview of workflow system features and capabilities, 2009, Future Gener. Comput. Syst.
[6] Matthew Lease, et al. Crowdsourcing for search evaluation, 2011, SIGIR Forum.
[7] Nicola Ferro, et al. DIRECTions: Design and Specification of an IR Evaluation Infrastructure, 2012, CLEF.
[8] Pamela Forner. Multilingual and Multimodal Information Access Evaluation - Second International Conference of the Cross-Language Evaluation Forum, CLEF 2011, Amsterdam, The Netherlands, September 19-22, 2011. Proceedings, 2011, CLEF.
[9] James C. Spohrer, et al. Editorial Column - Welcome to Our Declaration of Interdependence, 2009.
[10] Giuseppe Santucci, et al. Harnessing the Scientific Data Produced by the Experimental Evaluation of Search Engines and Information Access Systems: Improved Exploitation of Measures and Analyses in Scientific Production, 2022.
[11] Diane Kelly, et al. Methods for Evaluating Interactive Information Retrieval Systems with Users, 2009, Found. Trends Inf. Retr.
[12] Pearl Brereton, et al. Lessons from applying the systematic literature review process within the software engineering domain, 2007, J. Syst. Softw.
[13] José Luis Vicedo González, et al. TREC: Experiment and evaluation in information retrieval, 2007, J. Assoc. Inf. Sci. Technol.
[14] Preben Hansen, et al. Collaborative Information Retrieval in an information-intensive domain, 2005, Inf. Process. Manag.
[15] Henning Müller, et al. Assessing the Scholarly Impact of ImageCLEF, 2011, CLEF.
[16] Alistair Moffat, et al. Improvements that don't add up: ad-hoc retrieval results since 1998, 2009, CIKM.
[17] Marcel Worring, et al. The challenge problem for automated detection of 101 semantic concepts in multimedia, 2006, MM '06.
[18] Katja Hofmann, et al. Validating Query Simulators: An Experiment Using Commercial Searches and Purchases, 2010, CLEF.
[19] Alan F. Smeaton, et al. The scholarly impact of TRECVid (2003-2009), 2011, J. Assoc. Inf. Sci. Technol.
[20] Daniel A. Keim, et al. Visual analytics of anomaly detection in large data streams, 2009, Electronic Imaging.
[21] Henning Müller, et al. Ground truth generation in medical imaging: a crowdsourcing-based iterative approach, 2012, CrowdMM '12.
[22] Fabian Steeg, et al. Information-Retrieval: Evaluation, 2010.
[23] James Allan, et al. Frontiers, Challenges, and Opportunities for Information Retrieval, 2012.
[24] Paul Over, et al. Evaluation campaigns and TRECVid, 2006, MIR '06.
[25] Mladen A. Vouk, et al. Cloud computing - Issues, research and implementations, 2008, ITI 2008 - 30th International Conference on Information Technology Interfaces.
[26] M. de Rijke, et al. Building simulated queries for known-item topics: an analysis using six European languages, 2007, SIGIR.
[27] Ellen M. Voorhees, et al. TREC: Experiment and Evaluation in Information Retrieval (Digital Libraries and Electronic Publishing), 2005.
[28] Albert N. Link, et al. Economic impact assessment of NIST's Text REtrieval Conference (TREC) program. Final report, 2010.
[29] Abdur Chowdhury, et al. Using titles and category names from editor-driven taxonomies for automatic evaluation, 2003, CIKM '03.
[30] Stephen Robertson, et al. The methodology of information retrieval experiment, 1981.
[31] C. Cleverdon. Report on the testing and analysis of an investigation into the comparative efficiency of indexing systems, 1962.
[32] M. de Rijke, et al. Generating Pseudo Test Collections for Learning to Rank Scientific Articles, 2012, CLEF.
[33] Cyril Cleverdon, et al. The Cranfield tests on index language devices, 1997.
[34] Bill Hefley, et al. Service Science, Management and Engineering: Education for the 21st Century, 2008.
[35] Matthew O. Ward, et al. Visual Exploration of Stream Pattern Changes Using a Data-Driven Framework, 2010, ISVC.
[36] Giuseppe Santucci, et al. Visual interactive failure analysis: supporting users in information retrieval evaluation, 2012, IIR.
[38] A. M. Cox. Evaluation of Digital Libraries: An Insight into Useful Applications and Methods, 2010, Program.
[39] Jimmy J. Lin, et al. Pseudo test collections for learning web search ranking functions, 2011, SIGIR.
[40] Cyril W. Cleverdon, et al. Aslib Cranfield research project: report on the testing and analysis of an investigation into the comparative efficiency of indexing systems, 1962.
[41] Allan Hanbury, et al. Automated Component-Level Evaluation: Present and Future, 2010, CLEF.
[42] Kalervo Järvelin, et al. Information interaction in molecular medicine: integrated use of multiple channels, 2010, IIiX.
[43] Giorgio Maria Di Nunzio, et al. The Importance of Scientific Data Curation for Evaluation Campaigns, 2007, DELOS.
[44] James Allan, et al. Frontiers, challenges, and opportunities for information retrieval: Report from SWIRL 2012, the second strategic workshop on information retrieval in Lorne, 2012, SIGIR Forum.
[45] Giorgio Maria Di Nunzio, et al. A Proposal to Extend and Enrich the Scientific Data Curation of Evaluation Campaigns, 2007, EVIA@NTCIR.
[46] Stephen E. Robertson, et al. On the history of evaluation in IR, 2008, J. Inf. Sci.
[47] Nicola Ferro. DIRECT: the First Prototype of the PROMISE Evaluation Infrastructure for Information Retrieval Experimental Evaluation, 2011, ERCIM News.
[48] Nicola Ferro, et al. DESIRE 2011: workshop on data infrastructurEs for supporting information retrieval evaluation, 2012, SIGIR Forum.
[49] Mark Sanderson, et al. Test Collection Based Evaluation of Information Retrieval Systems, 2010, Found. Trends Inf. Retr.
[50] Sophia Ananiadou, et al. Text mining meets workflow: linking U-Compare with Taverna, 2010, Bioinformatics.
[51] Ewa Deelman, et al. Scientific workflows and clouds, 2010, ACM Crossroads.
[52] Matthew Lease, et al. Crowdsourcing for information retrieval, 2012, SIGIR Forum.
[53] Nicola Ferro, et al. DESIRE 2011: first international workshop on data infrastructures for supporting information retrieval evaluation, 2011, CIKM '11.
[54] Martin Braschler, et al. A PROMISE for Experimental Evaluation, 2010, CLEF.
[55] Giuseppe Santucci, et al. Collecting and assessing collaborative requirements, 2011.
[56] Maximilian Eibl, et al. A Large-Scale System Evaluation on Component-Level, 2011, ECIR.
[57] Gobinda G. Chowdhury, et al. TREC: Experiment and Evaluation in Information Retrieval, 2007.