Prediction of pandemic influenza

Nougairede et al. [1] critically discussed the adverse effects of epidemic modeling, arguing that models wrongly warned of devastating consequences before the 2009 pandemic actually occurred. In particular, the authors emphasized the inability of models to yield correct predictions. As a modeling expert, I have concerns regarding their interpretations.

First, it should be remembered that prediction has two distinct components: projection and forecasting [2]. A projection is an attempt to describe what would happen under certain assumptions and hypotheses, while a forecast is a quantitative attempt to predict what will happen in the future [3]. Prior to the 2009 pandemic, modeling studies offered projections of 'what if' scenarios under various assumptions about public health interventions (e.g. [4, 5]). It is clear that those studies did not intend to offer quantitatively valid forecasts. If projections and forecasts were mixed up in policymaking (i.e. if projected numbers seriously influenced policy as though they were forecasts), any resulting trouble should be attributed not to the modeling studies but to the communication between experts and policymakers.

Second, any prediction effort requires a baseline description of the 'pandemic in our mind', and empirical data from past pandemics in the twentieth century were almost the only objective references available. Unfortunately, the absence of a pandemic for more than 40 years forced us to rely on these historical data, and that reliance led us to present 'worst case' scenarios. Given that human cases of highly pathogenic avian influenza (H5N1) continue to be reported, keeping the worst case scenario in mind is not necessarily bad. However, that worst case scenario misled health policy in many settings from 2009 to 2010, and a specific lesson for modeling is that we must consistently account for variation in pandemic potential (e.g. the wide variability in severity).

Third, beyond the biased emphasis on worst case scenarios, there were two further lessons for modeling: (1) model structures and assumptions were not consistent with field observations, as exemplified by the difficulty of ascertaining all influenza cases and of estimating the case fatality ratio [6, 7] (a simple numerical illustration of this point is given below); and (2) real-time estimation was not incorporated into pandemic preparedness plans, so the data essential for such real-time exercises were not systematically considered before the pandemic. In future, similar misunderstandings could be avoided by addressing the issues mentioned above. Rather than taking the criticisms of Nougairede et al. [1] as grounds to abandon prediction science, revised pandemic plans should be formulated by working through the problems that have become apparent since 2009.
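To make the first of these two lessons concrete, the following is a minimal numerical sketch, using entirely hypothetical figures not drawn from the cited studies, of how incomplete case ascertainment inflates a case fatality ratio that is naively computed from laboratory-confirmed cases alone:

```python
# Illustrative sketch with hypothetical numbers: under-ascertainment of cases
# inflates a case fatality ratio (CFR) computed naively from confirmed cases.

true_infections = 1_000_000          # assumed total symptomatic infections
infection_fatality_ratio = 0.0005    # assumed true risk of death per infection
ascertainment_probability = 0.05     # assumed fraction of infections that are
                                     # laboratory-confirmed and reported

deaths = true_infections * infection_fatality_ratio            # 500 deaths
confirmed_cases = true_infections * ascertainment_probability  # 50,000 cases

# Deaths are assumed to be fully ascertained here, so dividing them by
# confirmed cases overstates severity by a factor of 1 / ascertainment_probability.
naive_cfr = deaths / confirmed_cases
print(f"Naive CFR from confirmed cases: {naive_cfr:.2%}")               # 1.00%
print(f"Assumed true severity:          {infection_fatality_ratio:.2%}")  # 0.05%
```

In this sketch the distortion comes from the denominator alone; real data are additionally affected by delays between case reporting and death, which the sketch ignores.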