Result-Blind Peer Reviews and Editorial Decisions: A Missing Pillar of Scientific Culture

The present article suggests a way to reduce the file drawer problem in scientific research (Rosenthal, 1978, 1979), that is, the tendency for "nonsignificant" results to remain hidden in scientists' file drawers because both authors and journals strongly prefer statistically significant results. We argue that peer-reviewed journals built on the principle of rigorous evaluation of research proposals before the results are known would successfully address this problem. Even a single journal adopting a result-blind evaluation policy would remedy the persistent problem of publication bias more effectively than other tools and techniques suggested so far. We also propose an ideal editorial policy for such a journal and discuss the pragmatic implications and potential problems associated with this policy. Moreover, we argue that such a journal would be a valuable addition to existing scientific publication outlets because it supports a scientific culture that encourages the publication of well-designed and technically sound empirical research irrespective of the results obtained. Finally, we argue that such a journal would be attractive to scientists, publishers, and research agencies.
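
The distorting effect of the file drawer can be illustrated with a small Monte Carlo sketch (not part of the original article; the true effect size, per-group sample size, and the "publish only significant, positive results" selection rule are illustrative assumptions). The simulation shows how selective publication of significant results inflates the apparent effect size relative to the full set of conducted studies.

# Illustrative sketch of the file drawer problem (assumed parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_d = 0.20        # small true standardized effect (assumption)
n_per_group = 30     # participants per group in each study (assumption)
n_studies = 10_000   # number of simulated studies

published_d, all_d = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    # Observed standardized mean difference (Cohen's d with pooled SD)
    d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2
    )
    all_d.append(d)
    if p < 0.05 and t > 0:
        # Only "significant" results in the predicted direction get published
        published_d.append(d)

print(f"True effect:                 d = {true_d:.2f}")
print(f"Mean effect, all studies:    d = {np.mean(all_d):.2f}")
print(f"Mean effect, published only: d = {np.mean(published_d):.2f}")
print(f"Share of studies published:  {len(published_d) / n_studies:.0%}")

With these assumed numbers, roughly a fifth of the studies reach significance, and the mean published effect is more than twice the true effect, which is the inflation that result-blind evaluation is meant to prevent.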

[1] I. Olkin et al., Models for estimating the number of unpublished studies, 1996, Statistics in Medicine.

[2] Julien Mayor et al., Are Scientists Nearsighted Gamblers? The Misleading Nature of Impact Factors, 2010, Frontiers in Psychology.

[3] R. Crandall, Improving editorial procedures, 1990.

[4] E. Yong, Nobel laureate challenges psychologists to clean up their act, 2012, Nature.

[5] A. Kühberger et al., A comprehensive review of reporting practices in psychological journals: Are effect sizes really enough?, 2013.

[6] G. William Walster et al., A Proposal for a New Editorial Policy in the Social Sciences, 1970.

[7] A. Tversky et al., Judgment under Uncertainty: Heuristics and Biases, 1982.

[8] The traditional editorial statement, 1990.

[9] Leif D. Nelson et al., False-Positive Psychology, 2011, Psychological Science.

[10] J. Losee, A historical introduction to the philosophy of science, 1972.

[11] We Knew the Future All Along, 2012, Perspectives on Psychological Science.

[12] Joel Kupfersmid, Improving what is published: A model in search of an editor, 1988.

[13] S. Fiedler et al., Is there evidence of publication biases in JDM research?, 2011, Judgment and Decision Making.

[14] Alexander Grob, Letter from the New Editor-in-Chief, 2010.

[15] G. Gigerenzer et al., Do studies of statistical power have an effect on the power of studies?, 1989.

[16] A. Greenwald, Consequences of Prejudice Against the Null Hypothesis, 1975.

[17] G. Gigerenzer et al., The null ritual: What you always wanted to know about significance testing but were afraid to ask, 2004.

[18] Barbara A. Spellman, Introduction to the Special Section, 2012, Perspectives on Psychological Science.

[19] J. Schooler, Unpublished results hide the decline effect, 2011, Nature.

[20] Jeffrey N. Rouder et al., Bayesian t tests for accepting and rejecting the null hypothesis, 2009, Psychonomic Bulletin & Review.

[21] A. Palmer et al., Detecting Publication Bias in Meta-analyses: A Case Study of Fluctuating Asymmetry and Sexual Selection, 1999, The American Naturalist.

[22] Joel B. Greenhouse et al., Selection Models and the File Drawer Problem, 1988.

[23] Michael D. Lee et al., p_rep misestimates the probability of replication, 2009, Psychonomic Bulletin & Review.

[24] R. Rosenthal, The file drawer problem and tolerance for null results, 1979.

[25] Jacob Cohen, The earth is round (p < .05), 1994.

[26] Edgar Erdfelder, Experimental psychology: A note on statistical analysis, 2010, Experimental Psychology.

[27] Jacob Cohen et al., The statistical power of abnormal-social psychological research: A review, 1962, Journal of Abnormal and Social Psychology.

[28] Christopher H. Schmid et al., In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias, 2005, Journal of Clinical Epidemiology.

[29] P. Killeen et al., An Alternative to Null-Hypothesis Significance Tests, 2005, Psychological Science.

[30] R. Rosenthal, Combining results of independent studies, 1978.

[31] I. Olkin et al., The case of the misleading funnel plot, 2006, BMJ: British Medical Journal.

[32] T. Sterling, Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa, 1959.

[33] J. Wicherts et al., The (mis)reporting of statistical results in psychology journals, 2011, Behavior Research Methods.

[34] A. Tversky et al., Judgment under Uncertainty: Heuristics and Biases, 1974, Science.

[35] Leland Wilkinson et al., Statistical Methods in Psychology Journals: Guidelines and Explanations, 2005.

[36] U. Schimmack, The ironic effect of significant results on the credibility of multiple-study articles, 2012, Psychological Methods.

[37] Bruce Thompson, The pivotal role of replication in psychological research, 1994.

[38] Jeffrey D. Scargle, Publication Bias: The “File-Drawer” Problem in Scientific Inference, 2000.

[39] Henry L. Roediger, Psychology’s Woes and a Partial Cure: The Value of Replication, 2012.

[40] M. Kendall, The Logic of Scientific Discovery, 1959.

[41] K. Popper, The Logic of Scientific Discovery, 1960.