Predictions of Hospital Mortality Rates: A Comparison of Data Sources

Comparative monitoring of hospital mortality rates is increasingly being used to evaluate and improve the quality of health care [1-5]. Essential to this process is accurate adjustment for differences in severity of illness and other risk factors among patients served by different caregivers. Monitoring systems therefore generally use risk-adjustment models that predict, as of the time of hospital admission, each patient's probability of dying if average care is given. Each hospital's results are then compared with the norm by determining whether a statistically significant difference exists between its observed and predicted mortality rates.

The power of a risk-adjustment model to predict an adverse outcome depends on the extent and accuracy of the data on each patient's clinical condition when care begins. Such information has traditionally been obtained electronically from patients' hospital bills (administrative data) or abstracted laboriously from written medical records (clinical data). Administrative data sets, although widely available and inexpensive, have been criticized as lacking the clinical detail necessary to permit adequate adjustment for each patient's underlying medical condition [6, 7]. Probably the best-studied and most controversial risk-adjustment models using administrative data were developed by the Health Care Financing Administration to evaluate hospital mortality rates among Medicare beneficiaries [8]. Several commercial severity systems (for example, the Acuity Index Method, All-Patient Refined Diagnosis-Related Groups, some versions of Clinical Disease Staging, Patient Management Categories, and Patient Risk-Adjusted Groups) also use only administrative data to adjust hospital mortality rates for risk [9]. When mortality predictions based on the Health Care Financing Administration's administrative data were compared with predictions based on administrative data supplemented with clinical data abstracted from medical records, the clinical data were shown to enhance predictive capability [10]. However, the cost and effort of acquiring clinical data have led such states as California and Florida to monitor hospitals by using administrative data alone [11, 12].

Iezzoni and colleagues [13] recently compared the abilities of two models that used clinical data and two models that used only administrative data to estimate the probabilities of death in patients who had acute myocardial infarction. They found that measures based on discharge abstracts yielded better mortality predictions than did measures based on clinical data. This is not surprising, because a discharge abstract (that is, an abstract of diagnoses and procedures coded for billing by using the International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM]) contains codes for all diagnoses treated during a particular hospitalization, regardless of when the symptoms appeared. A risk-adjustment model that includes hospital-acquired complications that usually precede death will almost invariably predict death better. However, including these diagnoses undermines the goal of adjusting for patients' conditions when care begins. A model of disease severity at hospital admission that includes potentially fatal hospital-acquired complications such as cardiac arrest, shock, and hypotension masks inadequate care by increasing the measured risk of patients whose health deteriorates during hospitalization.
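To make the observed-versus-predicted comparison concrete, the sketch below (in Python) tests whether a hospital's observed death count differs significantly from the count predicted by a risk-adjustment model, treating the predicted count as the sum of independent patient-level risks. This is a generic, minimal illustration under a normal approximation, not the specific test used by the monitoring programs cited above; the function name and example data are hypothetical.

```python
# Sketch: compare a hospital's observed deaths with the model-predicted number.
# A normal approximation is assumed; actual monitoring programs may use other tests.
import math

def compare_mortality(observed_deaths, predicted_probs):
    """observed_deaths: deaths actually observed at one hospital.
    predicted_probs: model-predicted death probability for each admitted patient."""
    expected = sum(predicted_probs)
    # Variance of the death count if patients' outcomes are independent Bernoulli trials
    variance = sum(p * (1.0 - p) for p in predicted_probs)
    z = (observed_deaths - expected) / math.sqrt(variance)
    p_value = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return expected, observed_deaths / expected, z, p_value

# Hypothetical hospital: 500 patients, each with a 9% predicted risk, 60 observed deaths
expected, oe_ratio, z, p = compare_mortality(60, [0.09] * 500)
```

An observed/expected ratio above 1 with a small p-value would flag a hospital for further review; the ratio alone is not evidence of poor care.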
Hannan and colleagues [14] recently suggested an alternative to basing risk adjustment on either administrative data alone or administrative data plus extensive sets of abstracted clinical data. These researchers studied mortality rates after coronary artery bypass graft surgery and added three clinical data elements (ejection fraction, clinically significant left main coronary artery stenosis, and previous open heart surgery) to administrative data. The resulting model predicted death nearly as well as did models derived from clinical data that were collected prospectively.

We explore whether laboratory data also improve the accuracy of risk-adjustment models without requiring expensive data abstraction. We compare the accuracy of mortality predictions that use administrative data alone; those that use administrative data plus laboratory values; and those that use the combination of administrative, laboratory, and clinical data. We also examine the likelihood that particular ICD-9-CM codes represent inpatient complications rather than comorbid conditions present at hospital admission.

Methods

Collection and Classification of Data Elements

Inpatient data for the risk-adjustment models were obtained between January 1991 and December 1992 from 30 acute care hospitals in Cleveland, Ohio [15]. The 30 participating hospitals range in size from 82 to 899 beds. Twenty-four hospitals are private nonprofit institutions, 5 are affiliated with churches, and 1 receives public (county) support. Nine hospitals have medical school affiliations, 11 have accredited residency programs (9 medical and 2 osteopathic), 7 are certified trauma centers, and 10 perform open heart surgery [16].

Data were obtained for 46 769 adults (age 18 years or older) who were consecutively discharged from the 30 participating hospitals after medical treatment for acute myocardial infarction (6088 patients; mean age, 68.7 years), cerebrovascular accident (9061 patients; mean age, 72.9 years), congestive heart failure (18 864 patients; mean age, 73.6 years), or pneumonia (12 756 patients; mean age, 68.9 years). These medical conditions had been selected previously by the participating hospitals and their corporate customers as collectively representing the highest proportion of admissions for acute care, a major share of hospitalization costs, and the highest proportion of in-hospital deaths [15]. Patient eligibility was determined from specific ICD-9-CM principal diagnosis codes; patients who had surgery were excluded. (Codes used to identify patients with each medical condition are listed in Appendix A.)

Data elements were abstracted from patients' records by medical records personnel trained in data abstraction and were designated as administrative, laboratory, or clinical. Administrative data elements comprised demographic characteristics, sources of admission (that is, the route and external source), and diagnostic information derived from ICD-9-CM diagnosis codes. To obtain diagnostic information, the data abstractors transcribed the ICD-9-CM codes entered into the medical record by the hospitals' professional coders. These are the same codes that the hospitals submitted for reimbursement, and their accuracy is subject to regular audit by peer review organizations. Laboratory data elements included blood chemistry, hematologic, and arterial blood gas variables.
Clinical data elements were findings at hospital admission that are generally available only in patients' charts (for example, chest radiographic and electrocardiographic findings, mental status, and vital signs).

Development of Risk-Adjustment Models

For each diagnosis, stepwise logistic regression [17] was used to develop administrative, laboratory, and clinical risk-adjustment models. Because comorbid conditions are not always distinguishable from complications, both restricted and unrestricted administrative models were considered. All variables derived from administrative data were eligible for inclusion in the unrestricted administrative models. The restricted administrative models included only the variables (such as diabetes mellitus, cancer, and chronic renal failure) that were unlikely to be complications of care. Only administrative data elements were included in the two types of administrative models. Both the laboratory and the clinical models contained restricted administrative data and laboratory data; the clinical models also included clinical data elements. (Appendix B lists all data elements found in one or more of the final risk-adjustment models.)

Only variables that showed a univariate association with death were considered for inclusion in the stepwise logistic models. Each continuous variable was screened to determine whether its relation to mortality was approximately linear. If so, the variable was treated as continuous within a specified range. If the association was nonlinear, the variable was represented by a set of dichotomous variables that corresponded to different ranges of values. (The first sketch at the end of this section illustrates this screening and selection process.)

We excluded patients for whom any vital sign was missing. Routine laboratory results (blood urea nitrogen, creatinine, glucose, and electrolyte levels and complete blood counts) were missing in 2.3% to 4.0% of the patients included. More specialized blood chemistry results (albumin, calcium, aspartate aminotransferase, lactate dehydrogenase, bilirubin, and alkaline phosphatase levels) were missing in 14.7% to 20.2% of included patients. Measurements of blood gases, creatine phosphokinase levels, prothrombin time, and partial thromboplastin time were missing in 35.9% to 50.9% of included patients. Specialized laboratory tests are often omitted because physicians expect the results to be normal or redundant, or because critically ill patients die before the tests can be done. Differences among hospitals in the accuracy and completeness of data recording were assessed and found to be very small.

To account for missing data on specific tests in specific conditions, mortality rates associated with ranges of observed laboratory values were compared with mortality rates for patients for whom data were missing; patients who lacked a given value received the value associated with the most similar mortality rate (the second sketch below illustrates this matching). For example, the mortality rate for the 12.5% of patients with acute myocardial infarction in whom albumin levels were not documented was 21.5%, whereas the overall mortality rate was 14.1%. Mortality rates in patients with acute myocardial infarction whose al
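The first sketch below illustrates, in Python, the model-development steps just described: univariate screening of candidate predictors, representation of nonlinear continuous variables as sets of range indicators, and forward stepwise logistic regression. It is a minimal illustration, not the authors' implementation; the significance thresholds, the forward-only (rather than bidirectional) search, and all variable names and cutpoints are assumptions.

```python
# Minimal sketch of the model-development pipeline described above.
# Assumes a pandas DataFrame `df` with a 0/1 outcome column (e.g., "died")
# and candidate predictor columns; thresholds are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def univariate_screen(df, outcome, candidates, alpha=0.05):
    """Keep only candidates showing a univariate association with death."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < alpha:
            kept.append(var)
    return kept

def dichotomize(df, var, cutpoints):
    """Represent a nonlinear continuous variable as a set of range indicators."""
    names = []
    for lo, hi in cutpoints:
        name = f"{var}_{lo}_{hi}"
        df[name] = ((df[var] >= lo) & (df[var] < hi)).astype(int)
        names.append(name)
    return names

def forward_stepwise_logit(df, outcome, candidates, p_enter=0.05):
    """Forward stepwise logistic regression using likelihood-ratio entry tests."""
    y, selected, remaining = df[outcome], [], list(candidates)
    current_llf = sm.Logit(y, np.ones((len(df), 1))).fit(disp=0).llf
    while remaining:
        best = None  # (p_value, variable, log-likelihood)
        for var in remaining:
            fit = sm.Logit(y, sm.add_constant(df[selected + [var]])).fit(disp=0)
            lr = 2.0 * (fit.llf - current_llf)   # LR statistic vs. current model
            p = stats.chi2.sf(lr, df=1)
            if best is None or p < best[0]:
                best = (p, var, fit.llf)
        if best[0] >= p_enter:
            break                                # no remaining variable qualifies
        selected.append(best[1])
        remaining.remove(best[1])
        current_llf = best[2]
    return sm.Logit(y, sm.add_constant(df[selected])).fit(disp=0)
```

The likelihood-ratio entry criterion shown here is one conventional choice; score or Wald tests are common alternatives in stepwise procedures.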
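The second sketch illustrates the missing-value strategy described above: patients lacking a given laboratory result are assigned a value from the observed range whose mortality rate is most similar to the mortality rate of the missing-data group. The bin edges, column names, and midpoint fill value are hypothetical, not taken from the study.

```python
# Sketch of mortality-matched filling of missing laboratory values.
# `bin_edges` and the midpoint fill are illustrative assumptions.
import pandas as pd

def impute_by_mortality_match(df, var, outcome, bin_edges):
    """Fill missing values of `var` with the midpoint of the observed value
    range whose death rate best matches that of patients missing `var`."""
    measured = df[var].notna()
    if measured.all():
        return df
    # Death rate among patients with no recorded value for this test
    missing_rate = df.loc[~measured, outcome].mean()
    # Death rate within each range of observed values
    bins = pd.cut(df.loc[measured, var], bins=bin_edges)
    rates = df.loc[measured, outcome].groupby(bins).mean()
    # Choose the range whose mortality is most similar to the missing group's
    best_bin = (rates - missing_rate).abs().idxmin()
    df.loc[~measured, var] = (best_bin.left + best_bin.right) / 2.0
    return df

# Example in the spirit of the text: for albumin in acute myocardial infarction,
# patients with no documented level died at a rate of 21.5%, so they would be
# assigned the albumin range whose observed death rate is nearest 21.5%
# (bin edges below are hypothetical).
# df = impute_by_mortality_match(df, "albumin", "died", [1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 5.5])
```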

References

[1] S. Greenland, et al. A critical look at methods for handling missing covariates in epidemiologic regression analyses. American Journal of Epidemiology. 1995.
[2] C. D. Naylor, et al. Pitfalls in nonrandomized outcomes studies: the case of incidental appendectomy with open cholecystectomy. JAMA. 1995.
[3] L. I. Iezzoni, et al. Predicting who dies depends on how severity is measured: implications for evaluating patient outcomes. Annals of Internal Medicine. 1995.
[4] G. Rosenthal, et al. Cleveland Health Quality Choice: a model for collaborative community-based outcomes assessment. The Joint Commission Journal on Quality Improvement. 1994.
[5] E. L. Hannan, et al. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994.
[6] J. Jollis, et al. A comparison of administrative versus clinical data: coronary artery bypass surgery as an example. Ischemic Heart Disease Patient Outcomes Research Team. Journal of Clinical Epidemiology. 1994.
[7] L. Sheiner, et al. Case-mix adjustment using objective measures of severity: the case for laboratory data. Health Services Research. 1994.
[8] E. DeLong, et al. Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research. Annals of Internal Medicine. 1993.
[9] R. C. Bradbury, et al. Predicted probabilities of hospital death as a measure of admission severity of illness. Inquiry. 1993.
[10] E. L. Hannan, et al. Clinical versus administrative data bases for CABG surgery: does it matter? Medical Care. 1992.
[11] A. Rimm, et al. Evaluation of the HCFA model for the analysis of mortality following hospitalization. Health Services Research. 1992.
[12] M. Pine, et al. Measuring and Managing Health Care Quality: Procedures, Techniques, and Protocols. 1992.
[13] M. Pine, et al. Using clinical variables to estimate the risk of patient mortality. Medical Care. 1991.
[14] E. J. Topol, et al. Coronary morphologic and clinical determinants of procedural outcome with angioplasty for multivessel coronary disease: implications for patient selection. Multivessel Angioplasty Prognosis Study Group. Circulation. 1990.
[15] F. Alemi, et al. Predicting in-hospital survival of myocardial infarction: a comparative study of various severity measures. Medical Care. 1990.
[16] W. L. Roper, et al. Effectiveness in health care: an initiative to evaluate and improve medical practice. The New England Journal of Medicine. 1988.
[17] P. Ellwood, et al. Shattuck Lecture: outcomes management. A technology of patient experience. The New England Journal of Medicine. 1988.
[18] M. S. Blumberg, et al. Risk adjusting health care outcomes: a methodologic review. Medical Care Review. 1986.
[19] J. Hanley, et al. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology. 1983.
[20] J. Hanley, et al. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982.
[21] D. Hosmer, et al. A review of goodness of fit statistics for use in the development of logistic regression models. American Journal of Epidemiology. 1982.