Lessons Learned: Recommendations for Establishing Critical Periodic Scientific Benchmarking

The dependence of life scientists on software has grown steadily in recent years. For many tasks, researchers must decide which of the available bioinformatics software tools is most suitable for their specific needs, and they should be able to objectively select the software that offers the highest accuracy, the best efficiency and the greatest reproducibility when integrated into their research projects. Critical benchmarking of bioinformatics methods, tools and web services is therefore an essential community service, as well as a critical component of reproducibility efforts. Unbiased and objective evaluations are challenging to set up and can only be effective when built and implemented around community-driven efforts, as demonstrated by the many ongoing community challenges in bioinformatics that followed the success of CASP. Community challenges bring the combined benefits of intense collaboration, transparency and standards harmonization.

Open systems for the continuous evaluation of methods offer an ideal complement to community challenges: they give larger communities of users, extending far beyond the developers themselves, a window onto the current state of method development that they can consult for their specific projects. By continuous evaluation systems we mean services that are always available and that periodically update their data and/or metrics according to a predefined schedule, bearing in mind that performance must always be interpreted within the context of each research domain. We argue here that the technology is now mature enough to bring community-driven benchmarking efforts to a higher level, one that should allow effective interoperability of benchmarks across related methods. New technological developments make it possible to overcome the limitations of early online benchmarking efforts such as EVA.

We therefore describe OpenEBench, a novel infrastructure designed to establish a continuous, automated benchmarking system for bioinformatics methods, tools and web services. OpenEBench is being developed to cater for the needs of the bioinformatics community, especially software developers, who need an objective and quantitative way to inform their decisions, as well as the larger community of end-users, who seek unbiased and up-to-date evaluations of bioinformatics methods. As such, OpenEBench should soon become a central place for bioinformatics software developers, community-driven benchmarking initiatives, researchers using bioinformatics methods, and funders interested in the results of method evaluations.
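To make the notion of a continuous evaluation system more concrete, the minimal sketch below (Python) outlines one way such a service could be organised: it stays available, re-runs the registered tools whenever a predefined schedule elapses, and republishes domain-specific metrics. All names (fetch_reference_datasets, run_tool, compute_domain_metrics, publish_results) and the weekly interval are hypothetical placeholders for illustration only; they do not correspond to the OpenEBench implementation or API.

```python
# Hypothetical sketch of a "continuous evaluation" loop; all functions below are
# illustrative stand-ins, not part of OpenEBench or any real benchmarking API.
import time
from datetime import datetime, timedelta

UPDATE_INTERVAL = timedelta(days=7)  # assumed weekly refresh schedule


def fetch_reference_datasets(domain: str) -> list:
    # Stand-in: a real system would pull the community's current gold-standard data.
    return [f"{domain}_reference_set"]


def run_tool(tool: str, datasets: list) -> dict:
    # Stand-in: execute the participating tool and collect its predictions.
    return {"tool": tool, "predictions": datasets}


def compute_domain_metrics(domain: str, output: dict) -> dict:
    # Stand-in: score predictions with metrics agreed upon by that research community.
    return {"tool": output["tool"], "domain": domain, "score": 0.0}


def publish_results(results: list) -> None:
    # Stand-in: expose the refreshed metrics to end-users (e.g. a web dashboard).
    print(datetime.utcnow().isoformat(), results)


def continuous_evaluation(domain: str, tools: list) -> None:
    """Re-evaluate every registered tool whenever the predefined schedule elapses."""
    last_run = datetime.min
    while True:
        if datetime.utcnow() - last_run >= UPDATE_INTERVAL:
            datasets = fetch_reference_datasets(domain)
            results = [compute_domain_metrics(domain, run_tool(t, datasets)) for t in tools]
            publish_results(results)
            last_run = datetime.utcnow()
        time.sleep(3600)  # poll the schedule hourly; the service itself stays up
```

In practice the scheduling shown here as an in-process loop would more likely be delegated to a workflow manager such as Nextflow or to a system scheduler, with results stored and versioned rather than simply printed.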

[1] Julio Saez-Rodriguez, et al. Crowdsourcing Network Inference: The DREAM Predictive Signaling Network Challenge, 2011, Science Signaling.

[2] Erik Schultes, et al. The FAIR Guiding Principles for scientific data management and stewardship, 2016, Scientific Data.

[3] Alfonso Valencia, et al. Overview of BioCreAtIvE: critical assessment of information extraction for biology, 2005, BMC Bioinformatics.

[4] Philip Zimmermann, et al. PGP source code and internals, 1995.

[5] Cathy H. Wu, et al. UniProt: the Universal Protein knowledgebase, 2004, Nucleic Acids Research.

[6] José Luís Oliveira, et al. An automated real-time integration and interoperability framework for bioinformatics, 2015, BMC Bioinformatics.

[7] Peter Wittenburg, et al. EUDAT: A New Cross-Disciplinary Data Infrastructure for Science, 2013, Int. J. Digit. Curation.

[8] Anne E. Trefethen, et al. Toward interoperable bioscience data, 2012, Nature Genetics.

[9] Ümit V. Çatalyürek, et al. Benchmarking short sequence mapping tools, 2013, BMC Bioinformatics.

[10] Eleanor Jane Budge, et al. Collective intelligence for translational medicine: Crowdsourcing insights and innovation from an interdisciplinary biomedical research community, 2015, Annals of Medicine.

[11] Adrian M. Altenhoff, et al. Standardized benchmarking in the quest for orthologs, 2016, Nature Methods.

[12] J. Thompson, et al. Issues in bioinformatics benchmarking: the case study of multiple sequence alignment, 2010, Nucleic Acids Research.

[13] Robert C. Edgar, et al. Quality measures for protein alignment benchmarks, 2010, Nucleic Acids Research.

[14] Joan Daemen, et al. AES Proposal: Rijndael, 1998.

[15] Sandor Vajda, et al. CAPRI: A Critical Assessment of PRedicted Interactions, 2003, Proteins.

[16] John Chilton, et al. The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2016 update, 2016, Nucleic Acids Research.

[17] Carl Boettiger, et al. An introduction to Docker for reproducible research, 2014, OPSR.

[18] Jonathan M. Keith, et al. A Bayesian method for comparing and combining binary classifiers in the absence of a gold standard, 2012, BMC Bioinformatics.

[19] Paolo Di Tommaso, et al. Nextflow enables reproducible computational workflows, 2017, Nature Biotechnology.

[20] Cédric Notredame, et al. Upcoming challenges for multiple sequence alignment methods in the high-throughput era, 2009, Bioinformatics.

[21] W. F. van Gunsteren, et al. Combined procedure of distance geometry and restrained molecular dynamics techniques for protein structure determination from nuclear magnetic resonance data: Application to the DNA binding domain of lac repressor from Escherichia coli, 1988, Proteins.

[22] Daniel W. A. Buchan, et al. A large-scale evaluation of computational protein function prediction, 2013, Nature Methods.

[23] Simon Heath, et al. From Wet-Lab to Variations: Concordance and Speed of Bioinformatics Pipelines for Whole Genome and Whole Exome Sequencing, 2016, Human Mutation.

[24] J. C. Costello, et al. Seeking the Wisdom of Crowds Through Challenge-Based Competitions in Biomedical Research, 2013, Clinical Pharmacology and Therapeutics.

[25] V. Marx. Biology: The big challenges of big data, 2013, Nature.

[26] Predrag Radivojac, et al. Ten Simple Rules for a Community Computational Challenge, 2015, PLoS Comput. Biol.

[27] Saurabh Sinha, et al. Towards realistic benchmarks for multiple alignments of non-coding sequences, 2010, BMC Bioinformatics.

[28] Juergen Haas, et al. The Protein Model Portal: a comprehensive resource for protein structure and model information, 2013, Database: The Journal of Biological Databases and Curation.

[29] K. Fidelis, et al. A large-scale experiment to assess protein structure prediction methods, 1995, Proteins.

[30] Janet M. Thornton, et al. ELIXIR: a distributed infrastructure for European biological data, 2012, Trends in Biotechnology.

[31] D. Fischer, et al. LiveBench-1: Continuous benchmarking of protein structure prediction servers, 2001, Protein Science.

[32] Wahidah Husain, et al. A Survey on Data Integration in Bioinformatics, 2011.

[33] Marc A. Martí-Renom, et al. EVA: evaluation of protein structure prediction servers, 2003, Nucleic Acids Research.

[34] Olivier Poch, et al. BAliBASE: a benchmark alignment database for the evaluation of multiple alignment programs, 1999, Bioinformatics.

[35] Volker A. Eyrich, et al. EVA: Large-scale analysis of secondary structure prediction, 2001, Proteins.

[36] D. Fischer, et al. CAFASP-1: Critical assessment of fully automated structure prediction methods, 1999, Proteins.

[37] S. Friend, et al. Crowdsourcing biomedical research: leveraging communities as innovation engines, 2016, Nature Reviews Genetics.

[38] R. Norel, et al. The self-assessment trap: can we all be better than average?, 2011, Molecular Systems Biology.

[39] G. Weinstock, et al. Creating a honey bee consensus gene set, 2007, Genome Biology.

[40] John D. Westbrook, et al. The PDB Format, mmCIF Formats, and Other Data Formats, 2005.

[41] Melissa Haendel, et al. A sea of standards for omics data: sink or swim?, 2013, J. Am. Medical Informatics Assoc.

[42] Kathleen Marchal, et al. SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning algorithms, 2006, BMC Bioinformatics.

[43] Brian McMahon, et al. Definition and exchange of crystallographic data, 2005.

[44] Anne-Laure Boulesteix, et al. Ten Simple Rules for Reducing Overoptimistic Reporting in Methodological Computational Research, 2015, PLoS Comput. Biol.

[45] Leszek Rychlewski, et al. LiveBench-8: The large-scale, continuous assessment of automated protein structure prediction, 2005, Protein Science.

[46] Sahil R. Kalra, et al. Big Challenges? Big Data …, 2015.