A recent issue of the Royal Statistical Society magazine “Significance” had an interesting article about the human tendency to be over-confident, and the authors conclude that “At the very least it is important for decision-makers to be aware that people are prone to overconfidence, and that to assume one is not is to unwittingly fall prey to the bias” [1]. From my experience of reviewing medical research articles, I find authors to be very over-confident about the strength of evidence provided by their research. This applies to randomised trials, but especially to observational research. In the same issue of Significance, on page 19, “Dr. Fisher” makes much the same point, describing how his perspective changed when he moved from author to referee. Being honest, I think it likely that I have been over-confident in my own research and opinions, but I like to think that in my mature years I have become more realistic, both as author and as referee!

OMOP is an empirically based project to find good methods for detecting possible new adverse effects of medicines using databases from healthcare organisations. The US Congress has required that the FDA have data on 100 million people available for post-marketing surveillance. This very idea may itself show over-confidence: believing that having the data available will mean that real effects are detected reliably.

Overall, the papers in this issue show clearly that there is considerable variation in the measures of association between drugs and adverse events. This is true both for associations believed to be real adverse drug reactions and for those believed to be coincidental. There are problems in being sure of a gold standard, and this is acknowledged in these papers, but even allowing for such issues it is clear that the variability is much greater than is captured by a confidence interval or significance test. This has been well known for a long time, and the excellent article by Maclure and Schneeweiss [2] sets out 11 domains that can lead to bias (and hence to variability beyond sampling error). The first eight relate to the data and methods, while the last three arise after the results are set out. Greenland suggests that such multiple biases can and should be modelled in a Bayesian framework [3]. The papers here are an empirical demonstration that variability in results in this context will occur, depending on:
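Greenland's suggestion can be made concrete with a small simulation. The sketch below is a minimal Monte Carlo (probabilistic) bias analysis in Python, in the spirit of multiple-bias modelling [3]; the observed log relative risk, its standard error, and the priors on three bias terms (confounding, misclassification, selection) are all invented for illustration and are not taken from any of the papers in this issue.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed association: log relative risk and its standard error.
# Illustrative values only, not from any paper discussed here.
log_rr_hat = np.log(1.5)
se_log_rr = 0.15

# Conventional 95% confidence interval: sampling error only.
ci_conventional = np.exp(log_rr_hat + np.array([-1.96, 1.96]) * se_log_rr)

# Probabilistic bias analysis: draw a bias term (on the log scale) for
# each hypothesised bias domain from a prior distribution, subtract the
# total bias from the estimate, and add back sampling error.
n_sims = 100_000
bias_confounding = rng.normal(0.10, 0.10, n_sims)  # suspected upward bias
bias_misclassification = rng.normal(0.00, 0.08, n_sims)
bias_selection = rng.normal(0.00, 0.05, n_sims)
sampling_error = rng.normal(0.00, se_log_rr, n_sims)

log_rr_adjusted = log_rr_hat - (bias_confounding
                                + bias_misclassification
                                + bias_selection) + sampling_error

# 95% simulation interval for the bias-adjusted relative risk.
ci_adjusted = np.exp(np.percentile(log_rr_adjusted, [2.5, 97.5]))

print("Conventional 95% CI:", ci_conventional.round(2))
print("Bias-adjusted 95% interval:", ci_adjusted.round(2))
```

Under these invented priors the bias-adjusted interval is both wider than and shifted relative to the conventional confidence interval; that extra width is precisely the variability beyond sampling error that a confidence interval or significance test does not capture.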
[1] Mannes AE, Moore DA. I know I'm right! A behavioural view of overconfidence. Significance. 2013.
[2] Maclure M, Schneeweiss S. Causation of bias: the episcope. Epidemiology. 2001.
[3] Greenland S. Multiple-bias modelling for analysis of observational data. Journal of the Royal Statistical Society, Series A. 2005.
[4] Waller PC, Evans SJW. A model for the future conduct of pharmacovigilance. Pharmacoepidemiology and Drug Safety. 2003.
[5] Avorn J, et al. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Medical Care. 2007.
[6] Rawlins M. De testimonio: on the evidence for decisions about the use of therapeutic interventions. The Lancet. 2008.
[7] Golder S, et al. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Medicine. 2011.
[8] Maclure M, et al. Design considerations in an active medical product safety monitoring system. Pharmacoepidemiology and Drug Safety. 2012.
[9] Ryan PB, et al. Desideratum for evidence-based epidemiology. Drug Safety. 2013.
[10] Aickelin U, et al. Comparison of algorithms that detect drug side effects using electronic healthcare databases. Soft Computing. 2013.
[11] Madigan D, et al. A comparison of the empirical performance of methods for a risk identification system. Drug Safety. 2013.