Overview of CENTRE@CLEF 2019: Sequel in the Systematic Reproducibility Realm

Reproducibility has become increasingly important across many research areas, and IR is no exception: the community has begun to examine reproducibility and its impact on research results. This paper describes the second edition of CENTRE, a lab on reproducibility held at CLEF 2019. The aim of CENTRE is to run a replicability and reproducibility challenge spanning the major IR evaluation campaigns and to provide the IR community with a venue where previous research results can be explored and discussed. We report the participants' results and offer preliminary considerations on this second edition of CENTRE@CLEF 2019.
