Monitoring mortality

Mohammed and colleagues suggest that hospital standardised mortality ratios are prone to the “constant risk fallacy” and that the use of certain variables (the Charlson comorbidity index and emergency admission) for the case mix adjustment model is “unsafe.”2 They focus on at least two mechanisms that might contribute to this constant risk fallacy: differential measurement error, and inconsistent proxy measures of risk. Certainly, measurement error, including poor coding, will have an impact on the ratios; what matters, however, is the extent to which they are affected.

The paper gives a hypothetical example of how differential measurement error can distort a standardised mortality ratio, but it is an extreme example based on artificial data. We calculated 2007-8 hospital standardised mortality ratios with and without adjustment for comorbidity (using the Charlson index) for each of the four hospitals in the paper and found that they changed by less than 3%.

The authors argue that, because the trust with the highest mean Charlson score has the lowest mean length of stay, emergency readmission rate, and crude mortality rate of the four, the Charlson score does not reflect case mix but simply quality of coding. Further analysis reveals, however, that this higher mean Charlson score is explained by 35% of the admissions included in that trust’s hospital standardised mortality ratio being for cancer, compared with between 9% and 25% for the other three hospitals. The Charlson score can only partially describe a hospital’s case mix, which explains why it may not always correlate well with outcome measures.

The paper argues that the large variations in the proportions of emergency and non-emergency patients with zero length of stay indicate that systematically different admission policies were being adopted across hospitals. We are not sure their data show this, as they also show large variation across the three years within the same hospital. Their calculations (table 2) also seem to include day cases, which explains the low crude death rates and mean lengths of stay and affects the proportion of admissions that are emergencies. In any case, the variation in risk can be interpreted in two ways: either as bias or as real differences in risk between hospitals.

Mid Staffordshire, one of the hospital trusts in the paper, has been severely criticised by the Healthcare Commission, which outlined serious concerns about the “appalling” emergency care in the trust. The report stated that there were deficiencies at “virtually every stage” in the care of people admitted as emergencies and concluded that the trust supplied insufficient evidence to support its claim that the apparent high mortality could be explained as a problem with the coding of data.

Under competing interests Mohammed and colleagues state “None declared.” We note, however, that several members of the steering committee represent the hospitals included in the study and may therefore have potential conflicts of interest; the medical director and the information manager from Mid Staffordshire General Hospitals were both on the paper’s steering committee.

In conclusion, we agree that the hospital standardised mortality ratio could be affected by several factors, including data quality, admission thresholds, discharge strategies, and underlying levels of morbidity in the population, but we maintain that quality of care must also be considered as a contributing factor.
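
For readers who wish to see what the comparison above involves, the short Python sketch below shows how a hospital standardised mortality ratio can be computed with and without a comorbidity covariate. It is a minimal illustration using simulated admissions and an ordinary logistic regression risk model; the variable names, coefficients, and data are assumptions for demonstration only and are not the case mix model or the data used in the study.

    # Minimal sketch (not the study's case mix model): compute a hospital
    # standardised mortality ratio (HSMR) with and without a Charlson term.
    # All data below are simulated for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 10_000
    df = pd.DataFrame({
        "age": rng.integers(18, 95, n),
        "emergency": rng.integers(0, 2, n),
        "charlson": rng.poisson(1.5, n),                # Charlson comorbidity index
        "hospital": rng.choice(list("ABCD"), n),
    })
    # Artificial in-hospital death risk, purely for the simulation
    logit = -6 + 0.05 * df.age + 0.8 * df.emergency + 0.3 * df.charlson
    df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    def hsmr(data, covariates):
        """Fit a risk model on all admissions, then compare observed with
        expected deaths per hospital: HSMR = 100 * observed / expected."""
        X = sm.add_constant(data[covariates])
        model = sm.Logit(data["died"], X).fit(disp=0)
        data = data.assign(expected=model.predict(X))
        g = data.groupby("hospital")
        return 100 * g["died"].sum() / g["expected"].sum()

    without_charlson = hsmr(df, ["age", "emergency"])
    with_charlson = hsmr(df, ["age", "emergency", "charlson"])
    print(pd.DataFrame({"without": without_charlson, "with": with_charlson}))

In this framework, adding or dropping the Charlson term changes each hospital’s expected deaths and hence its ratio; our re-analysis simply found that, for the four trusts in the paper, that change was small (under 3%).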
When a hospital has a high standardised mortality ratio, further investigation is merited to exclude or identify quality of care issues. Hospitals that have taken this approach in the US, UK, and other countries have gained useful insight into mortality at their institutions, and this has been associated with documented falls in mortality.5 Such a reduction in mortality rates can only be good for patients.