The reporting odds ratio versus the proportional reporting ratio: ‘deuce’

An article published in this issue by Rothman et al. entitled ‘The reporting odds ratio and its advantages over the proportional reporting ratio’ argues that the reporting odds ratio (ROR) is a more valid measure of association when applied to datasets of spontaneous reports of suspected adverse reactions. However, in our view, this paper fails to provide a coherent basis to support its title, conclusions and take-home messages. It has brought together two different issues and confused them. The first issue is what measure is to be used to identify associations, and the second is what comparisons are to be made within a database.

RORs and proportional reporting ratios (PRRs) are both measures of disproportionality used for the purpose of detecting signals in spontaneous ADR reporting databases. Both are calculated from the same 2 × 2 table, with the PRR being identical to the calculation of a relative risk (RR) from a cohort study, i.e. (a/(a + c))/(b/(b + d)), and the ROR identical to the calculation of an odds ratio (OR) from a case-control study, i.e. ad/bc. It is well recognised that these measures will give very similar results provided that, as is virtually always the case in this context, a is a small proportion of a + c and b is a small proportion of b + d. Effectively, this is the same argument used to show that the OR in a case-control study approximates the RR. Whilst the calculations of PRR and RR, and of ROR and OR, are respectively identical, it is important to understand that when used in this context they are not meant to actually estimate the RR but to assist in efficiently identifying potential drug hazards from often large datasets of spontaneous reports of suspected adverse reactions.

A judgment on the validity and utility of these measures should be based on a comparison of their sensitivity, specificity and predictive values in signal detection from a real dataset. Since Rothman et al. make only passing reference to a paper published in this Journal in 2002, the reader might be forgiven for assuming that such data do not yet exist. In fact, that 2002 paper made such a comparison and showed clearly that, in practice, there is no important difference between the measures for the purpose for which they are used. However, it was pointed out that there are some minor issues which might influence the choice of measure. In particular, making adjustments in a logistic regression analysis is easy with an ROR, but the ROR will occasionally be impossible to calculate (i.e. when b or c is zero), whereas the PRR can still be calculated when c is zero (but not when b is zero). Statistical arithmetic is unimportant here, and the differences between these measures of association in this setting are of no major significance.
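To make the arithmetic concrete, the minimal sketch below (in Python, with invented counts chosen purely for illustration and not drawn from any real reporting database) computes both measures from the same 2 × 2 table and shows how close they are when a is a small fraction of a + c and b a small fraction of b + d.

```python
# Hypothetical 2 x 2 table of spontaneous reports (invented numbers, illustration only):
#                     event of interest   all other events
# drug of interest           a = 40            c = 9,960
# all other drugs            b = 200           d = 199,800

a, b, c, d = 40, 200, 9_960, 199_800

prr = (a / (a + c)) / (b / (b + d))   # same arithmetic as a cohort-style relative risk
ror = (a * d) / (b * c)               # same arithmetic as a case-control odds ratio

print(f"PRR = {prr:.3f}")  # -> 4.000
print(f"ROR = {ror:.3f}")  # -> 4.012, virtually identical because a << a+c and b << b+d
```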
The second, and possibly more interesting, issue is the choice of a comparison group. This has nothing whatever to do with ORs or RRs. Rothman et al. use a single invented example for illustrative purposes, but it is of little relevance to the usual problems faced in signal detection. The authors seem to be trying to demonstrate that inclusion of the data for event B in their tables will bias the estimate of RR for event A when the drug produces a 10-fold reduction in event A. This would be unusual, and we doubt that these approaches will be valuable in such a situation. What would be more relevant from their example would be to consider whether or not event B will be affected by event A (i.e. will it lead to a spurious signal being detected?). In that situation, the ROR is actually larger (and therefore more biased?) than the PRR (1.8 vs. 1.7), but still insufficient to raise a strong signal. Thus their example seems merely to illustrate the mathematical inevitability that an ROR will always be further away from 1 than the PRR. However, the precise values of the point estimates derived from such calculations are of little real importance in this setting.
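The ‘further from 1’ point follows directly from the definitions given earlier: ROR/PRR = d(a + c)/(c(b + d)), which is greater than 1 exactly when ad > bc, so the ROR and PRR always lie on the same side of 1, with the ROR the more extreme of the two. The short sketch below (again in Python, using arbitrary simulated tables rather than the authors' figures) checks this numerically.

```python
import random

# Over random 2 x 2 tables with all cells positive (arbitrary counts, not the
# Rothman et al. example), confirm the ROR is always at least as far from 1 as the PRR.
random.seed(0)
for _ in range(10_000):
    a, b, c, d = (random.randint(1, 500) for _ in range(4))
    prr = (a / (a + c)) / (b / (b + d))
    ror = (a * d) / (b * c)
    # ROR/PRR = d(a+c)/(c(b+d)) exceeds 1 exactly when ad > bc, so both measures
    # sit on the same side of 1 and the ROR is the more extreme of the two.
    assert abs(ror - 1) >= abs(prr - 1) - 1e-12

print("ROR was at least as far from 1 as the PRR in every simulated table")
```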