Encouraging Reproducibility in Scientific Research of the Internet (Dagstuhl Seminar 18412)

Reproducibility of research in Computer Science (CS), and in the field of networking in particular, is a well-recognized problem. For several reasons, including the sensitive and/or proprietary nature of some Internet measurements, the networking research community pays limited attention to the reproducibility of results, instead tending to accept papers that appear plausible. This article summarises a 2.5-day Dagstuhl seminar on Encouraging Reproducibility in Scientific Research of the Internet held in October 2018. The seminar discussed challenges to improving the reproducibility of scientific Internet research and developed a set of recommendations that we as a community can undertake to initiate a cultural change toward reproducibility of our work. It brought together people from both academia and industry to set expectations and formulate concrete recommendations for reproducible research. This iteration of the seminar was scoped to computer networking research, although the outcomes are likely relevant for a broader audience from multiple interdisciplinary fields.

Seminar: October 7–10, 2018 – http://www.dagstuhl.de/18412

2012 ACM Subject Classification: Networks
