The reproducibility “crisis”

The debate over a reproducibility crisis has been simmering for years, amplified by growing concerns over a number of replication studies that have failed to reproduce previously published positive results. Additional evidence from larger meta-analyses of past papers also points to a lack of reproducibility in biomedical research, with potentially dire consequences for drug development and investment in research. One of the largest meta-analyses concluded that low levels of reproducibility, at best around 50% of all preclinical biomedical research, were delaying lifesaving therapies, increasing pressure on research budgets and raising the costs of drug development [1]. The paper claimed that about US$28 billion a year was spent largely fruitlessly on preclinical research in the USA alone.

### A problem of statistics

However, the assertion that a 50% level of reproducibility equates to a crisis, or that many of the original studies were really fruitless, has been disputed by some specialists in replication. “A 50% level of reproducibility is generally reported as being bad, but that is a complete misconstrual of what to expect”, commented Jeffrey Mogil, who holds the Canada Research Chair in Genetics of Pain at McGill University in Montreal. “There is no way you could expect 100% reproducibility, and if you did, then the studies could not have been very good in the first place. If people could replicate published studies all the time, then they could not have been cutting edge and pushing the boundaries”.

One reason not to expect 100% reproducibility in preclinical studies is that cutting-edge or exploratory research deals with a lot of uncertainty and competing hypotheses, of which only a few can be correct. After all, there would be no need to conduct experiments at all if the outcome were completely predictable. For that reason, an initial preclinical study cannot be absolutely false or true, but must …