A decentralized framework for cultivating research lifecycle transparency

Research transparency has been advocated as a key means of addressing the current reproducibility crisis. This article proposes an enhanced form of research transparency, termed lifecycle transparency. Over the entire lifecycle of a research effort, this approach captures the syntactical contexts of artifacts and stakeholders, such as timestamps, agreements, and dependency requirements, for completing each research phase. For example, such contexts might include when, where, and from whom patients' consent and institutional review board approvals were obtained before a clinical trial was carried out. However, existing open-science tools are often dedicated to particular research phases or disciplines and are therefore insufficient to support lifecycle transparency. We thus propose a novel decentralized framework that serves as a common medium for interaction among open-science tools and produces irrefutable, immutable proofs of progress that can be verified automatically.
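To make the notion of automatically verifiable, immutable proofs of progress concrete, the sketch below shows one minimal way such proofs could be recorded: each completed research phase is appended to a hash-chained log that commits to its predecessor, so any later tampering with a recorded phase is detectable. This is an illustrative assumption, not the framework's actual protocol; the function names `record_phase` and `verify`, and the example phase labels, are hypothetical.

```python
import hashlib
import json
import time


def record_phase(ledger, phase, context):
    """Append a hash-chained proof of progress for one research phase.

    Each entry commits to the previous entry's hash, so retroactively
    altering any recorded phase breaks the chain and can be detected
    automatically by any party holding a copy of the ledger.
    """
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "phase": phase,          # hypothetical label, e.g. "irb-approval"
        "context": context,      # timestamps, agreements, dependencies
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry


def verify(ledger):
    """Recompute every hash; return True only if the chain is intact."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


# Example usage with hypothetical clinical-trial phases:
ledger = []
record_phase(ledger, "irb-approval", {"board": "IRB-042", "consent": "v1.2"})
record_phase(ledger, "data-collection", {"site": "clinic-A", "n": 120})
assert verify(ledger)
```

In a decentralized deployment, the chained hashes rather than the artifacts themselves would be shared among stakeholders, which is one plausible way a framework of this kind could let heterogeneous open-science tools attest to each other's progress without a trusted central registry.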
