Examining the Reproducibility of 6 Published Studies in Public Health Services and Systems Research

Objective: Research replication, or repeating a study de novo, is the scientific standard for building evidence and identifying spurious results. While replication is ideal, it is often expensive and time-consuming. Reproducibility, or reanalysis of data to verify published findings, is one proposed minimum alternative standard. While a lack of research reproducibility has been identified as a serious and prevalent problem in biomedical research and a few other fields, little work has been done to examine the reproducibility of public health research. We examined reproducibility in 6 studies from the public health services and systems research subfield of public health research.

Design: Following the methods described in each of the 6 papers, we computed the descriptive and inferential statistics for each study. We compared our results with the original study results, examining percentage differences in descriptive statistics and differences in effect size, significance, and precision of inferential statistics. All project work was completed in 2017.

Results: For each paper, we found consistency between original and reproduced results in at least 1 of the 4 areas examined. However, we also found some inconsistencies. We identified incorrect transcription of results and omission of detail about data management and analyses as the primary contributors to these inconsistencies.

Recommendations: Increasing reproducibility can improve the quality of science. Researchers, journals, employers, and funders can all play a role in improving the reproducibility of science through several strategies, including publishing data and statistical code, using guidelines to write clear and complete methods sections, conducting reproducibility reviews, and incentivizing reproducible science.
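The comparison described in the Design section could be sketched roughly as follows. This is a minimal illustration, not the authors' actual analysis code; the function names, the example values, and the 0.05 threshold are assumptions for illustration only.

```python
def percent_difference(original, reproduced):
    """Percentage difference of a reproduced statistic relative to the
    originally published value (used here for descriptive statistics)."""
    if original == 0:
        return 0.0 if reproduced == 0 else float("inf")
    return abs(reproduced - original) / abs(original) * 100

def significance_agrees(p_original, p_reproduced, alpha=0.05):
    """True when the original and reproduced p-values fall on the same
    side of the significance threshold (i.e., the substantive conclusion
    about significance is reproduced)."""
    return (p_original < alpha) == (p_reproduced < alpha)

# Illustrative comparison: a reproduced mean of 41.2 vs. a published mean of 40.0
diff = percent_difference(40.0, 41.2)   # ~3% difference
agree = significance_agrees(0.03, 0.04) # both significant at alpha = 0.05
```

A reproduced result might then be flagged as inconsistent when the percentage difference exceeds a chosen tolerance or when the significance calls disagree; the specific tolerances used in the study are not restated here.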
