Fighting Publication Bias: Introducing the Negative Results Section

Only data that are available through publications and, to a certain extent, through presentations at conferences can contribute to progress in the life sciences. However, it has long been known that a strong publication bias exists, in particular against the publication of data that do not reproduce previously published material or that refute the investigators' initial hypothesis. Such contradictory evidence is commonly known as 'negative data.' This slightly derogatory term reflects the bias against studies in which investigators were unable to reject their null hypothesis (H0), the tool of frequentist statistics that posits no difference between experimental groups. Researchers are well aware of this bias: journals are usually not keen to publish the nonexistence of a phenomenon or treatment effect, and editors have little interest in publishing data that refute, or fail to reproduce, previously published work, with the exception of spectacular cases that guarantee the attention of the scientific community as well as extra citations (Ioannidis and Trikalinos, 2005). The authors of negative results are required to provide evidence for failure to reject the null hypothesis under numerous conditions (e.g., dosages, assays, outcome parameters, additional species or cell types), whereas a positive result would be considered worthwhile under any single one of these conditions (Rockwell et al, 2006). Herein lies a dilemma: one can never prove the absence of an effect, because, as Altman and Bland (1995) remind us, 'absence of evidence is not evidence of absence.'

It has been demonstrated that studies reporting positive, or significant, results are more likely to be published, and that statistically significant outcomes have higher odds of being fully reported (Dwan et al, 2008). Negative results are also more likely than positive results to be published in journals with lower impact factors (Littner et al, 2005). Many of you have experienced this phenomenon yourselves: scientists often mention in conversation that they 'were not able to reproduce' a particular finding, a statement very often countered by the question 'Why did you not publish this? It would have been important for me to know.' Publication bias has been investigated systematically, particularly in clinical trials (e.g., Liebeskind et al, 2006).

Systematic reviews and meta-analyses have exposed the problem, as they are heavily confounded by it (Sutton et al, 2000). Given a sufficiently large number of original studies, meta-analysis can even quantify the bias attributable to unpublished data. Where this has been done, for example with Egger plots and trim-and-fill analysis (Duval and Tweedie, 2000), imputation of the probable results of the unpublished experiments not only reveals how much data are missing but also estimates the 'true' effect sizes that would result from including them. Quite commonly, a substantial proportion of the existing data appears to be missing. Including the modeled missing data in the meta-analysis sometimes abolishes the published effect of an intervention, or the existence of a phenomenon, altogether. In many cases effect sizes shrink dramatically, suggesting that the literature very often represents only the 'positive' tip of an iceberg, while unpublished data loom below the surface. Such missing data could substantially alter our pathophysiological understanding or our treatment concepts.
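To make the magnitude of this distortion concrete, here is a minimal Python simulation (our illustration, not part of the original argument, and not the trim-and-fill procedure itself): it applies a crude 'publish only significant, positive results' filter to simulated small studies and pools the surviving effect sizes with an inverse-variance (fixed-effect) estimate. The true effect, group sizes, and selection rule are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many small two-group studies of an intervention with a modest true effect.
true_effect = 0.2   # assumed true standardized mean difference (illustrative)
n_per_group = 20
n_studies = 200

effects = np.empty(n_studies)
ses = np.empty(n_studies)
for i in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    effects[i] = treated.mean() - control.mean()
    ses[i] = np.sqrt(treated.var(ddof=1) / n_per_group +
                     control.var(ddof=1) / n_per_group)

# Crude publication filter: only significant, positive results make it into print.
published = effects / ses > 1.96

def pooled_fixed_effect(e, s):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    w = 1.0 / s**2
    return np.sum(w * e) / np.sum(w)

print(f"True effect:                {true_effect:.2f}")
print(f"Pooled over all studies:    {pooled_fixed_effect(effects, ses):.2f}")
print(f"Pooled over published only: {pooled_fixed_effect(effects[published], ses[published]):.2f}")
print(f"Studies missing from the literature: {1 - published.mean():.0%}")
```

Under these assumptions, the pooled estimate from the 'published' subset comes out several times larger than the true effect, while the large majority of studies never appear at all; funnel-plot asymmetry and trim-and-fill imputation are designed to detect and correct exactly this kind of inflation.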
Only recently have systematic reviews been introduced into experimental medicine; indeed, the stroke and cerebrovascular fields have pioneered this movement. These systematic reviews have exposed various sources of bias and produced the first indications that publication bias is highly prevalent (Macleod et al, 2004). Macleod and colleagues have now, for the first time, quantified publication bias in animal stroke studies and demonstrated that it leads to major overstatements of efficacy (Sena et al, 2010). The phenomenon of publication bias has long been known and long been bemoaned, and its substantial negative impact on science has been quantified. But how can we improve this lamentable situation, which may contribute greatly to our difficulties in translating bench findings to the bedside? The impetus must now come from the journals and publishers (De Maria, 2004; Diguet et al, 2004; Dirnagl, 2006; Knight, 2003). To our knowledge, only one journal in the neurosciences, Neurobiology of Aging, has thus far formally addressed the problem of negative results.

[1] Trikalinos TA, et al. Early extreme contradictory estimates may appear in published research: the Proteus phenomenon in molecular genetics research and randomized trials. Journal of Clinical Epidemiology, 2005.

[2] Rockwell S, et al. Publishing negative results: the problem of publication bias. Radiation Research, 2006.

[3] Howells DW, et al. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke, 2004.

[4] Ioannidis J, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE, 2008.

[5] Altman DG, et al. Absence of evidence is not evidence of absence. Australian Veterinary Journal, 1996.

[6] Howells D, et al. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biology, 2010.

[7] Brooks J. Why most published research findings are false (Ioannidis JP, Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece). 2008.

[8] Mimouni F, et al. Negative results and impact factor: a lesson from neonatology. Archives of Pediatrics & Adolescent Medicine, 2005.

[9] Olsen B, et al. Editorial: Journal of Negative Results in Biomedicine. Journal of Negative Results in BioMedicine, 2002.

[10] Dirnagl U, et al. Reprint: Good laboratory practice: preventing introduction of bias at the bench. Journal of Cerebral Blood Flow & Metabolism, 2009.

[11] Sayre J, et al. Evidence of publication bias in reporting acute stroke clinical trials. Neurology, 2006.

[12] Bézard E, et al. Rise and fall of minocycline in neuroprotection: need to promote publication of negative results. Experimental Neurology, 2004.

[13] Jones DR, et al. Empirical assessment of effect of publication bias on meta-analyses. BMJ, 2000.

[14] Dirnagl U. Bench to bedside: the quest for quality in experimental stroke research. Journal of Cerebral Blood Flow & Metabolism, 2006.

[15] Dirnagl U, et al. Reprint: Good laboratory practice: preventing introduction of bias at the bench. Stroke, 2009.

[16] DeMaria A. Publication bias and journals as policemen. Journal of the American College of Cardiology, 2004.

[17] Knight J. Negative results: null and void. Nature, 2003.

[18] Duval S, et al. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 2000.