Objective
To improve the performance of the England and Wales large-scale multiple statistical surveillance system for infectious disease outbreaks, with a view to reducing the number of false reports while retaining good power to detect genuine outbreaks.

Introduction
There has been much interest in the use of statistical surveillance systems over the last decade, prompted by concerns over bioterrorism, the emergence of new pathogens such as SARS and swine flu, and the persistent public health problem of infectious disease outbreaks. In the United Kingdom (UK), statistical surveillance methods have been in routine use at the Health Protection Agency (HPA) since the early 1990s and at Health Protection Scotland (HPS) since the early 2000s (1,2). These are based on a simple yet robust quasi-Poisson regression method (1). We revisit the algorithm with a view to improving its performance.

Methods
We fit a quasi-Poisson regression model to baseline data. One limitation of the current algorithm is the small number of baseline weeks it uses. We propose a simple seasonal adjustment using factors, extending the model to include a 10-level seasonal factor, and we always fit the trend component, irrespective of its statistical significance. We are also concerned that the existing weighting procedure is too drastic: the baseline at a given week is down-weighted if the standardized Anscombe residual for that week is greater than 1. This condition was chosen empirically to avoid reducing the sensitivity of the system in the presence of large outbreaks in the baselines, but it may increase the false positive rate (FPR) unduly when there are no, or only small, outbreaks in the baselines. We investigate several other options, including restricting the down-weighting to cases where the Anscombe residuals are greater than 2 or 3. We also evaluate a new re-weighting scheme informed by past decisions: under this adaptive scheme, baseline weeks on which an alarm was flagged are down-weighted to reduce their effect on current predictions, with the exceedance score used as the re-weighting criterion. Finally, we investigate the validity of the upper threshold values based on the quasi-Poisson model when the data are generated from known negative binomial distributions.

Results
Our evaluation of the existing algorithm showed that the FPR is too high. A novel feature of our new models is that they make use of much more baseline data; this resulted in better estimation of the trend and variance and decreased the FPR. In addition, we found that the trend should always be fitted, even when non-significant (or extreme), as this reduces the discrepancies in the results from one week to the next. The adaptive re-weighting scheme gave broadly equivalent results to the re-weighting method based on scaled Anscombe residuals. Using the latter, as in the original HPA method, but with a much higher threshold for re-weighting decreased the FPR further. Our investigations also suggest that the negative binomial model is a reasonable one, though not ideal in all circumstances; thus, there is a good case for replacing the quasi-Poisson model with the negative binomial. One of the unusual features of the HPA system is that it is run every week on a database of more than 3300 distinct organisms, which is likely to produce a large number of aberrances. We found that retaining the exceedance score approach based on the 0.995 quantile is perfectly reasonable; this involves ranking aberrant organisms in order of exceedance.
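As a rough illustration of the baseline model described in the Methods, the sketch below (in Python with statsmodels and scipy, not the software used in the study) fits a quasi-Poisson GLM with a linear trend and a 10-level seasonal factor to baseline weekly counts, derives an upper threshold from the 0.995 quantile of a negative binomial approximation with the estimated mean and dispersion, and computes an exceedance score for the current week. The seasonal-factor construction, function names and threshold approximation are illustrative assumptions rather than the exact HPA formulation.

```python
# Illustrative sketch (not the HPA implementation): quasi-Poisson baseline
# model with trend + 10-level seasonal factor, a 0.995 upper threshold and
# an exceedance score for the current week.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import nbinom

def baseline_threshold(weeks, counts, current_week, quantile=0.995):
    """Fit the baseline model and return (fitted mean, upper threshold)."""
    weeks = np.asarray(weeks, dtype=float)
    # Assumed simplification: split the 52-week year into 10 roughly equal
    # blocks to form the seasonal factor.
    season = pd.Categorical((weeks % 52).astype(int) * 10 // 52,
                            categories=range(10))
    X = pd.get_dummies(pd.DataFrame({"season": season}), drop_first=True)
    X.insert(0, "trend", weeks)               # trend is always included
    X = sm.add_constant(X.astype(float))
    # Quasi-Poisson fit: Poisson likelihood, dispersion from Pearson X^2.
    fit = sm.GLM(np.asarray(counts, dtype=float), X,
                 family=sm.families.Poisson()).fit(scale="X2")
    phi = max(fit.scale, 1.0 + 1e-6)          # dispersion, floored above 1

    # Linear predictor for the current week.
    x0 = np.zeros(X.shape[1])
    x0[0] = 1.0                               # intercept
    x0[1] = current_week                      # trend
    cur_season = int(current_week % 52) * 10 // 52
    if cur_season > 0:                        # season_1 .. season_9 dummies
        x0[1 + cur_season] = 1.0
    mu0 = float(np.exp(x0 @ fit.params.values))

    # Upper threshold: 0.995 quantile of a negative binomial with mean mu0
    # and variance phi * mu0 (an approximation to the paper's threshold).
    n, p = mu0 / (phi - 1.0), 1.0 / phi
    return mu0, float(nbinom.ppf(quantile, n, p))

def exceedance_score(observed, mu0, upper):
    """Exceedance score: values above 1 indicate an aberrance."""
    return (observed - mu0) / max(upper - mu0, 1e-9)
```

An alarm would then be raised for organisms whose current count gives an exceedance score above 1, with aberrant organisms ranked by that score.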
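The down-weighting step could, under the same assumptions, be sketched as computing scaled Anscombe residuals from an initial fit, down-weighting baseline weeks whose residual exceeds the chosen cut-off (1 in the current algorithm, 2 or 3 in the alternatives investigated), and re-fitting the GLM with those weights. The inverse-squared-residual weights and the rescaling used here are one common choice, not necessarily the exact scheme evaluated in the paper.

```python
# Illustrative down-weighting of baseline weeks with large residuals; the
# resulting weights would be passed back into the GLM fit (e.g. via the
# var_weights argument of statsmodels' GLM) before re-estimating the model.
import numpy as np

def anscombe_weights(counts, mu, phi, cutoff=2.0):
    """Weights for re-fitting: down-weight weeks whose scaled Anscombe
    residual exceeds `cutoff`, leaving the other weeks at weight 1."""
    y = np.asarray(counts, dtype=float)
    mu = np.asarray(mu, dtype=float)
    # Anscombe residuals for the Poisson family, scaled by the dispersion.
    r = 1.5 * (y ** (2 / 3) - mu ** (2 / 3)) / (mu ** (1 / 6) * np.sqrt(phi))
    w = np.ones_like(y)
    flagged = r > cutoff
    w[flagged] = r[flagged] ** -2.0           # inverse-squared down-weighting
    return w * len(w) / w.sum()               # rescale so the weights sum to n
```

The adaptive scheme described above would replace the residual criterion with the exceedance scores of past weeks, down-weighting baseline weeks on which an alarm was flagged.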
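Finally, the robustness check against negative binomial data could be approximated by a short simulation: generate in-control negative binomial counts, build a threshold from the estimated mean and dispersion of each simulated baseline, and record how often an in-control current week is flagged. The parameter values and the moment-based dispersion estimate below are illustrative assumptions; the nominal false positive rate for a 0.995 threshold is 0.5%.

```python
# Illustrative simulation: empirical false positive rate of a 0.995 threshold
# when weekly counts are truly negative binomial (mean mu, variance phi * mu).
import numpy as np
from scipy.stats import nbinom

def empirical_fpr(mu=5.0, phi=2.0, n_baseline=49, n_sims=5000,
                  quantile=0.995, seed=0):
    rng = np.random.default_rng(seed)
    n, p = mu / (phi - 1.0), 1.0 / phi        # scipy/numpy parameterisation
    alarms = 0
    for _ in range(n_sims):
        base = rng.negative_binomial(n, p, size=n_baseline)
        mu_hat = base.mean()
        phi_hat = max(base.var(ddof=1) / max(mu_hat, 1e-9), 1.0 + 1e-6)
        upper = nbinom.ppf(quantile, mu_hat / (phi_hat - 1.0), 1.0 / phi_hat)
        alarms += rng.negative_binomial(n, p) > upper
    return alarms / n_sims                    # nominally about 0.005
```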
Conclusions
We have undertaken a thorough evaluation of the HPA’s outbreak detection system based on simulated and real data. The main conclusion from this evaluation is that the FPR is too high, owing to a combination of factors, notably excessive down-weighting of high baselines and reliance on too few baseline weeks.
[1] David J. Spiegelhalter, et al. Use of the false discovery rate when comparing multiple health care providers. Journal of Clinical Epidemiology, 2008.
[2] David Bock, et al. A review and discussion of prospective statistical surveillance in public health. 2003.
[3] Andrew W. Moore, et al. Algorithms for rapid outbreak detection: a research synthesis. Journal of Biomedical Informatics, 2005.
[4] Howard S. Burkom, et al. Statistical Challenges Facing Early Outbreak Detection in Biosurveillance. Technometrics, 2010.
[5] Nick Andrews, et al. A Statistical Algorithm for the Early Detection of Outbreaks of Infectious Disease. 1996.
[6] George Gettinby, et al. Prediction of Infectious Diseases: An Exception Reporting System. 2003.
[7] Matthias Greiner, et al. German outbreak of Escherichia coli O104:H4 associated with sprouts. The New England Journal of Medicine, 2011.
[8] Michael Höhle, et al. surveillance: An R package for the monitoring of infectious diseases. Computational Statistics, 2007.
[9] Paul H. Garthwaite, et al. Statistical methods for the prospective detection of infectious disease outbreaks: a review. 2012.
[10] David J. Spiegelhalter, et al. Statistical methods for healthcare regulation: rating, screening and surveillance. 2012.
[11] Stephen E. Fienberg, et al. Discussion on the paper by Spiegelhalter, Sherlaw-Johnson, Bardsley, Blunt, Wood and Grigg. 2012.