A method for screening the quality of hospital care using administrative data: preliminary validation results.

Applying a computerized algorithm to administrative data to help assess the quality of hospital care is an intriguing prospect. As Iezzoni and colleagues point out, opinions differ sharply on the worth of such efforts. This article significantly advances the state of the art in using administrative data to screen for potential quality-of-care problems, and its focus on identifying complications of care goes well beyond many government organizations' emphasis on hospital mortality rates.

One question the paper does not raise is: what is the practical upper limit on sensitivity and specificity when computerized screen results are compared with the consensus judgments of a group of independent physicians? Resampling techniques such as bootstrapping might be used to estimate the stability of those consensus judgments. When the judgments of two groups of physicians are compared with each other, the resulting sensitivity and specificity will not be 0.99. More training of the physician-panel members would probably also have increased interrater reliability. While acknowledging this problem, the researchers' detailed analysis of the panel results is intriguing and offers a model for such studies; one hopes the authors will follow up on the avenues opened here.

A further question is what degree of accuracy is necessary to identify facilities with higher-than-expected rates of complications. The authors discuss the problems involved in using administrative data to target hospitals and departments for more costly in-depth reviews of quality. The promising findings reported here deserve validation in other studies, and the algorithms should certainly find a ready audience among insurers and hospitals willing to try them out.

Finally, should we expect additional research to lead to improvement in the authors' algorithms? I believe the algorithms will prove difficult to improve upon, but perhaps we should not worry about this. At some point the cost of trying to identify and correct quality problems in "minimally outlier" hospitals will exceed the benefits, particularly given alternative uses for the funds. Might we now be close to the "flat of the curve" in the development of such systems for identifying quality problems? This issue deserves much fuller discussion in future studies.
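To make the bootstrapping suggestion above concrete, the following sketch shows how one might estimate the stability of a screen's sensitivity and specificity against panel consensus judgments. The data here are entirely hypothetical (the counts, the 1000-replicate choice, and the helper names `sens_spec` and `ci` are all illustrative assumptions, not the authors' method); a real analysis would resample the actual case-level screen/panel pairs.

```python
import random

# Hypothetical data: for each case, the computerized screen's flag
# (1 = potential problem) paired with the physician panel's consensus
# judgment (1 = problem confirmed). Counts are invented for illustration.
random.seed(0)
screen = [1] * 40 + [0] * 60
panel = [1] * 30 + [0] * 10 + [1] * 5 + [0] * 55
cases = list(zip(screen, panel))

def sens_spec(pairs):
    """Sensitivity and specificity of the screen, treating the panel
    consensus as the reference standard."""
    tp = sum(1 for s, p in pairs if s == 1 and p == 1)
    fn = sum(1 for s, p in pairs if s == 0 and p == 1)
    tn = sum(1 for s, p in pairs if s == 0 and p == 0)
    fp = sum(1 for s, p in pairs if s == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Nonparametric bootstrap: resample cases with replacement and recompute
# the two statistics to gauge how stable the point estimates are.
boot = [sens_spec([random.choice(cases) for _ in cases]) for _ in range(1000)]
sens_vals = sorted(s for s, _ in boot)
spec_vals = sorted(sp for _, sp in boot)

def ci(values):
    """Approximate 95% percentile interval from 1000 bootstrap replicates."""
    return values[25], values[974]

print("sensitivity point estimate:", sens_spec(cases)[0])
print("sensitivity 95% CI:", ci(sens_vals))
print("specificity 95% CI:", ci(spec_vals))
```

The same resampling scheme could be turned around to compare two independent physician panels against each other, which would give an empirical ceiling on the agreement any computerized screen can be expected to reach.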
