Publication bias and the canonization of false facts

Science is facing a “replication crisis” in which many experimental findings cannot be replicated and are likely to be false. Does this imply that many scientific facts are false as well? To find out, we explore the process by which a claim becomes fact. We model the community’s confidence in a claim as a Markov process with successive published results shifting the degree of belief. Publication bias in favor of positive findings influences the distribution of published results. We find that unless a sufficient fraction of negative results are published, false claims frequently can become canonized as fact. Data-dredging, p-hacking, and similar behaviors exacerbate the problem. Should negative results become easier to publish as a claim approaches acceptance as a fact, however, true and false claims would be more readily distinguished. To the degree that the model reflects the real world, there may be serious concerns about the validity of purported facts in some disciplines.

DOI: http://dx.doi.org/10.7554/eLife.21451.001
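To make the abstract's mechanism concrete, here is a minimal sketch of a belief-updating model of this kind, not the paper's exact formulation: the community's confidence in a claim is updated by Bayes' rule after each published result, positive results are always published while negative results appear only with some probability, and a claim is treated as canonized or rejected once belief crosses an upper or lower threshold. All parameter names and values below (p_pos_true, p_pos_false, beta, the 0.99/0.01 thresholds) are illustrative assumptions.

```python
import random

def simulate_claim(is_true, p_pos_true=0.8, p_pos_false=0.2,
                   beta=0.2, upper=0.99, lower=0.01, prior=0.5,
                   max_results=10_000):
    """Sketch of one claim's belief trajectory under publication bias.

    is_true      -- whether the claim is actually true
    p_pos_true   -- chance an experiment on a true claim yields a positive result
    p_pos_false  -- chance an experiment on a false claim yields a false positive
    beta         -- probability that a negative result gets published
    upper/lower  -- belief thresholds for canonization as fact / rejection
    """
    belief = prior
    for _ in range(max_results):
        # Run an experiment; the outcome depends on the claim's underlying truth.
        positive = random.random() < (p_pos_true if is_true else p_pos_false)
        # Publication bias: positives always appear, negatives only sometimes.
        published = positive or random.random() < beta
        if not published:
            continue
        # The community updates by Bayes' rule on the published outcome,
        # naively treating the published record as an unbiased sample.
        if positive:
            like_true, like_false = p_pos_true, p_pos_false
        else:
            like_true, like_false = 1 - p_pos_true, 1 - p_pos_false
        belief = belief * like_true / (belief * like_true + (1 - belief) * like_false)
        if belief >= upper:
            return "canonized"
        if belief <= lower:
            return "rejected"
    return "undecided"

# Fraction of *false* claims canonized at a given negative-publication rate.
outcomes = [simulate_claim(is_true=False, beta=0.2) for _ in range(10_000)]
print(outcomes.count("canonized") / len(outcomes))
```

Raising beta in this sketch makes published negatives more informative relative to the steady stream of published positives, so false claims are more often driven to rejection rather than canonization, which is the qualitative behavior the abstract describes.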
