A Systems Approach to Understanding and Improving Research Integrity

Concern about the integrity of empirical research has grown in recent years in light of studies showing that the vast majority of publications in academic journals report positive results, that many of these results are false and cannot be replicated, and that many positive results are the product of data dredging and of flexible data analysis practices coupled with selective reporting. While a number of potential solutions have been proposed, their effects are poorly understood, and empirical evaluation of each would take many years. We propose that methods from the systems sciences be used to assess the effects, both positive and negative, of proposed solutions to the problem of declining research integrity, such as study registration, Registered Reports, and open access to methods and data. To illustrate the potential application of systems science methods to the study of research integrity, we describe three broad types of models: one built on the characteristics of specific academic disciplines; one a diffusion-of-research-norms model conceptualizing researchers as susceptible, "infected," and recovered; and one conceptualizing publications as a product of an industry comprising academics who respond to incentives and disincentives.
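The diffusion-of-research-norms model described above borrows the susceptible/infected/recovered (SIR) structure from epidemiology. As an illustrative sketch only, not the authors' actual model, the dynamics can be written as a simple compartmental simulation; the parameters `beta` (rate at which susceptible researchers adopt questionable practices through contact with "infected" colleagues) and `gamma` (rate at which "infected" researchers recover, e.g., by adopting open-science norms) are hypothetical placeholders.

```python
# Minimal SIR-style sketch of the diffusion of research norms,
# integrated with forward Euler steps. All parameter values are
# illustrative assumptions, not estimates from the literature.
def simulate_norm_diffusion(s0, i0, r0, beta, gamma, steps, dt=0.1):
    """Return a trajectory of (susceptible, infected, recovered) counts.

    s0, i0, r0 -- initial numbers of researchers in each compartment
    beta       -- transmission rate of questionable research practices
    gamma      -- recovery rate (adoption of better norms)
    """
    n = s0 + i0 + r0  # total population of researchers (conserved)
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(steps):
        # Flow from susceptible to "infected": contact-driven adoption.
        new_infections = beta * s * i / n * dt
        # Flow from "infected" to recovered: norm change or intervention.
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history
```

Because inflows and outflows balance at every step, the total population is conserved; an intervention such as Registered Reports could be represented by raising `gamma` or lowering `beta` and comparing trajectories.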
