Assessing Quality Using Administrative Data

State and regional efforts to assess the quality of health care often start with administrative data, which are a by-product of administering health services, enrolling members into health insurance plans, and reimbursing for health care services. By definition, administrative data were never intended for use in quality assessment. As a result, clinicians often dismiss these data, arguing that the information cannot be trusted. Nonetheless, with detailed clinical information buried deep within paper medical records and thus expensive to extract, administrative data possess important virtues: they are readily available, inexpensive to acquire, and computer readable, and they typically encompass entire regional populations or large, well-defined subpopulations.

In the health policy community, hopes for administrative data were initially high. Beginning in the early 1970s, administrative data quantified startling practice variations across small geographic areas [1, 2]. In the 1980s, administrative databases became a mainstay of research on the outcomes of care [3, 4]. In 1989, the legislation that created the Agency for Health Care Policy and Research (AHCPR) stipulated the use of claims data in determining the outcomes, effectiveness, and appropriateness of different therapies (Public Law 101-239, Section 1142[c]). Five years later, however, the Office of Technology Assessment offered a stinging appraisal: "Contrary to the expectations expressed in the legislation establishing AHCPR, administrative databases generally have not proved useful in answering questions about the comparative effectiveness of alternative medical treatments" [5].

The costs of acquiring detailed clinical information, however, often force concessions in the real world. For example, in 1990, California's Assembly debated new requirements for reporting clinical data to evaluate hospital quality [6]. When annual data-collection costs were estimated at $61 million, fiscal reality intervened: the legislature instead mandated the creation of quality measures that used California's existing administrative database. Widespread quality assessment thus typically demands a tradeoff between the credibility of clinical data and the expense and feasibility of data collection. Can administrative data produce useful judgments about the quality of health care?

Defining Quality

What is quality? For decades, physicians protested that defining health care quality was impossible. Today, however, experts claim that rigorous quality measures can systematically assess care across groups of patients [7, 8]. Nonetheless, consensus about specific methods for measuring quality remains elusive.

Different conceptual frameworks for defining quality stress different dimensions of health care delivery. Donabedian's classic framework [9] delineated three dimensions: 1) structure, or the characteristics of a health care setting (for example, the physical plant, available technology, staffing patterns, and credentialing procedures); 2) process, or what is done to patients; and 3) outcomes, or how patients do after health care interventions. The three dimensions are intertwined, but their relative utility depends on context. Few links between processes and outcomes are backed by solid evidence from well-controlled studies, and outcomes that are not linked to specific medical practices provide little guidance for developing quality-improvement strategies [10]. In addition, comparing outcomes across groups frequently requires adjustment for patient risk, recognizing that some patients are sicker than others [11].
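To make the logic of risk adjustment concrete, the following sketch applies indirect standardization to a handful of synthetic discharges: each discharge contributes its stratum's reference death rate to an expected count, and observed deaths are divided by that expectation to yield an observed-to-expected (O/E) ratio. The strata, rates, and records are entirely hypothetical, and the sketch illustrates only the arithmetic, not any particular severity-adjustment system.

    # Sketch: indirect standardization of hospital mortality, using
    # hypothetical case-mix strata. Illustrative only; real risk
    # adjustment rests on far richer clinical models.

    # (stratum, died) pairs for one hospital's discharges; entirely synthetic.
    discharges = [
        ("pneumonia_65plus", True), ("pneumonia_65plus", False),
        ("pneumonia_under65", False), ("ami_65plus", True),
        ("ami_65plus", False), ("ami_under65", False),
    ]

    # Hypothetical statewide death rates per stratum (reference population).
    reference_rates = {
        "pneumonia_65plus": 0.12, "pneumonia_under65": 0.04,
        "ami_65plus": 0.18, "ami_under65": 0.06,
    }

    observed = sum(died for _, died in discharges)
    expected = sum(reference_rates[stratum] for stratum, _ in discharges)

    # O/E > 1 suggests more deaths than the case mix predicts; < 1, fewer.
    oe_ratio = observed / expected
    print(f"observed={observed}, expected={expected:.2f}, O/E={oe_ratio:.2f}")

An O/E ratio materially above 1.0 would flag a hospital whose deaths exceed what its case mix predicts, subject to all the caveats about data accuracy discussed later in this article.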
Other important dimensions emerge when process is split into two components: technical quality and interpersonal quality (for example, communication, caring, and respect for patient preferences). Another process question involves the appropriateness of services: errors of omission (failing to do necessary things) and errors of commission (doing unnecessary things). Both kinds of error can be related to another important dimension of quality, access to health care: in errors of omission, access may be impeded; in errors of commission, access may be too easy or the inducements to perform procedures too great.

In today's environment, determining who (or what) is accountable for observed quality is as important as measuring quality itself. This requires defining a unit of analysis: quality for whom? Potential units of analysis include individual patients, patients grouped by provider, or populations defined by region or by an important characteristic (for example, insurer or patient age). Methods for measuring quality across populations differ from those that scrutinize quality for individual patients. Given these multidimensional perspectives, a single answer may not suffice to judge whether administrative data can assess health care quality. As the following sections discuss, administrative data may capture some dimensions of quality and units of observation better than others.

Content of Administrative Databases

The three major producers of administrative databases are the federal government (including the Health Care Financing Administration [HCFA], which administers Medicare and oversees Medicaid; the Department of Defense; and the Department of Veterans Affairs), state governments, and private insurers [3, 4, 12-19]. Although administrative files initially concentrated on information from acute care hospitals, information is increasingly compiled from outpatient, long-term care, home health, and hospice programs. Most administrative files explicitly aim to minimize data collection: their source documents (for example, claim forms) contain the minimum information required to perform the relevant administrative function (for example, to verify and pay claims). In this article, I focus on hospital-derived data (such as those obtained from discharge abstracts), but many of the issues examined apply to other care settings.

The clinical content of administrative databases delimits their potential to measure the quality of health care. Administrative sources always contain routine demographic data (Table 1). Additional clinical information includes diagnosis codes (based on the International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM]) and procedure codes. Hospitals report procedures using ICD-9-CM codes, but physicians generally use codes from the American Medical Association's Current Procedural Terminology. The two coding systems do not readily link, hindering comparisons between hospital- and physician-generated data.

Table 1. Contents of the Uniform Hospital Discharge Data Set

The ICD-9-CM contains codes for many conditions that are technically not diseases (Table 2). Given this diversity, creatively combining ICD-9-CM codes produces snapshots of clinical scenarios. For example, data selected from the 1994 discharge abstract of a man in a California hospital (Table 3) suggest the following scenario: A 62-year-old white man with a history of chronic renal failure requiring hemodialysis and of type 2 diabetes with retinopathy was admitted with Mallory-Weiss syndrome. Blood loss from the esophageal tear may have caused orthostatic hypotension. During the 9-day hospitalization, the patient was also treated for Klebsiella pneumonia.

Table 2. Examples of Information Contained in ICD-9-CM Codes

Table 3. Discharge Abstract Information for a Patient Admitted to a California Hospital in 1994

Severity measures based on administrative data [20-22] exploit this diversity of ICD-9-CM codes to compare risk-adjusted patient outcomes across hospitals. For example, Disease Staging rates patients with pneumonia as having more severe disease if the discharge abstract also contains codes for sepsis.
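The flavor of such code-based staging can be caricatured in a few lines. In the sketch below, the code prefixes and the staging rule are simplified assumptions for illustration; Disease Staging itself is a far more detailed, validated system.

    # Toy illustration of code-based severity staging; not the actual
    # Disease Staging algorithm. Simplified assumption: ICD-9-CM groups
    # pneumonia near codes 480-486 and septicemia under 038.

    PNEUMONIA_PREFIXES = tuple(str(n) for n in range(480, 487))  # "480".."486"
    SEPSIS_PREFIX = "038"

    def pneumonia_stage(diagnosis_codes):
        """Return a crude severity stage from a discharge abstract's codes."""
        has_pneumonia = any(c.startswith(PNEUMONIA_PREFIXES) for c in diagnosis_codes)
        if not has_pneumonia:
            return None
        has_sepsis = any(c.startswith(SEPSIS_PREFIX) for c in diagnosis_codes)
        return "pneumonia with sepsis (more severe)" if has_sepsis else "pneumonia"

    # A hypothetical abstract: Klebsiella pneumonia (482.0) plus
    # unspecified septicemia (038.9) and diabetes (250.00).
    print(pneumonia_stage(["482.0", "038.9", "250.00"]))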
Attributes of Administrative Data

Administrative files contain limited clinical insight to inform quality assessment. Administrative data cannot elucidate the interpersonal quality of care, evaluate the technical quality of processes of care, determine most errors of omission or commission, or assess the appropriateness of care. Some exceptions to these negative judgments exist. For example, with longitudinal person-level data, one could detect failures to immunize children (errors of omission), as sketched below, if all immunizations were coded properly, which is unlikely. Certain ICD-9-CM procedure codes prompt concerns about technical quality (for example, 39.41, control of hemorrhage after vascular surgery, and 54.12, reopening of recent laparotomy site), but the specificity of these codes is suspect.
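A minimal sketch of that longitudinal logic follows; the claim layout and the immunization code set are hypothetical placeholders, and in practice undercoding would make such flags unreliable.

    # Sketch: flagging a possible error of omission (no immunization on
    # record) from longitudinal person-level claims. Field names and the
    # code set are hypothetical placeholders; undercoding would make
    # these flags unreliable in practice.

    from collections import defaultdict

    IMMUNIZATION_CODES = {"90701", "90707", "90712"}  # placeholder procedure codes

    claims = [  # (patient_id, procedure_code) pairs, entirely synthetic
        ("child-01", "90707"), ("child-01", "99213"),
        ("child-02", "99213"), ("child-02", "99214"),
    ]

    codes_by_patient = defaultdict(set)
    for patient_id, code in claims:
        codes_by_patient[patient_id].add(code)

    for patient_id, codes in sorted(codes_by_patient.items()):
        if not codes & IMMUNIZATION_CODES:
            print(f"{patient_id}: no immunization claim found (possible omission)")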
Nonetheless, administrative data are widely used to produce hospital report cards that primarily compare in-hospital mortality rates. The mechanics are easy. For example, in Massachusetts, reporters for The Boston Globe purchased the state's database of hospital discharge abstracts, conducted their own analyses, and published a report card on hospital mortality that was explicitly intended to provide insight into the quality of health care [23].

Are quality assessments based on administrative data valid? As Donabedian observed [9], a major aspect of validity has to do with the accuracy of the data. The Institute of Medicine's Committee on Regional Health Data Networks made the reliability and validity of data an absolute requirement to be satisfied before public dissemination of derived quality measures [12]: "The public interest is materially served when society is given as much information on costs, quality, and value for the health care dollar expended as can be given accurately. Public disclosure is acceptable only when it: (1) involves information and analytic results that come from studies that have been well conducted, (2) is based on data that can be shown to be reliable and valid for the purposes intended, and (3) is accompanied by appropriate educational material." What, therefore, are the important attributes of administrative data?

Data Quality

Like quality of care
References

[1] Bradbury RC, et al. Predicted probabilities of hospital death as a measure of admission severity of illness. Inquiry. 1993.

[2] Lawthers A, et al. Variation in office-based quality. A claims-based profile of care provided to Medicare patients with diabetes. JAMA. 1995.

[3] Waterstraat F, et al. Diagnostic coding quality and its impact on healthcare reimbursement: research perspectives. Journal. 1990.

[4] Iezzoni LI, et al. Predicting Who Dies Depends on How Severity Is Measured: Implications for Evaluating Patient Outcomes. Annals of Internal Medicine. 1995.

[5] Williams DK, et al. Assessing hospital-associated deaths from discharge data. The role of length of stay and comorbidities. JAMA. 1988.

[6] Payne TH, et al. How useful is the UMLS Metathesaurus in developing a controlled vocabulary for an automated problem list? Proceedings, Symposium on Computer Applications in Medical Care. 1993.

[7] Lohr K, et al. Health Data in the Information Age. 1994.

[8] Iezzoni L, et al. Judging hospitals by severity-adjusted mortality rates: the case of CABG surgery. Inquiry. 1996.

[9] Hannan E, et al. Using Medicare claims data to assess provider quality for CABG surgery: does it work well enough? Health Services Research. 1997.

[10] Krushat WM, et al. Medicare reimbursement accuracy under the prospective payment system, 1985 to 1988. JAMA. 1992.

[11] Wajda A, et al. Record linkage strategies, outpatient procedures, and administrative data. Medical Care. 1996.

[12] Brook R, et al. The effect of alternative case-mix adjustments on mortality differences between municipal and voluntary hospitals in New York City. Health Services Research. 1994.

[13] Keeler E, et al. Costing Medical Care: Using Medicare Administrative Data. Medical Care. 1994.

[14] Simborg DW. DRG creep: a new hospital-acquired disease. The New England Journal of Medicine. 1981.

[15] Mark D, et al. Bias in the coding of hospital discharge data and its implications for quality assessment. Medical Care. 1994.

[16] Payne TH, et al. How well does ICD9 represent phrases used in the medical record problem list? Proceedings, Symposium on Computer Applications in Medical Care. 1992.

[17] Fisher E, et al. Comorbidities, complications, and coding bias. Does the number of diagnosis codes matter in predicting in-hospital mortality? JAMA. 1992.

[18] Cimino J. Review Paper: Coding Systems in Health Care. Methods of Information in Medicine. 1995.

[19] Krushat WM, et al. Accuracy of diagnostic coding for Medicare patients under the prospective-payment system. The New England Journal of Medicine. 1988.

[20] Epstein MH. Uses of state-level hospital discharge databases. 1992.

[21] McGlynn E, et al. Health system reform and quality. JAMA. 1996.

[22] Avorn J, et al. Medicaid data as a resource for epidemiologic studies: strengths and limitations. Journal of Clinical Epidemiology. 1989.

[23] Gonnella JS, et al. Staging of disease. A case-mix measurement. JAMA. 1984.

[24] Iezzoni L, et al. Severity measurement methods and judging hospital death rates for pneumonia. Medical Care. 1996.

[25] Luft H, et al. The California Hospital Outcomes Project: using administrative data to compare hospital performance. The Joint Commission Journal on Quality Improvement. 1995.

[26] Meistrell M, et al. Adopting a corporate perspective on databases. Improving support for research and decision making. Medical Care. 1996.

[27] Iezzoni L. Risk Adjustment for Measuring Healthcare Outcomes. 1994.

[28] Young WW, et al. PMC Patient Severity Scale: derivation and validation. Health Services Research. 1994.

[29] Gittelsohn A, et al. Small Area Variations in Health Care Delivery. Science. 1973.

[30] Brook RH, et al. Quality of health care. Part 2: measuring quality of care. The New England Journal of Medicine. 1997.

[31] Iglehart J. The National Committee for Quality Assurance. The New England Journal of Medicine. 1996.

[32] Christman L. Physician Profiling and Risk Adjustment. 1996.

[33] Lohr KN. Outcome measurement: concepts and questions. Inquiry. 1988.

[34] Hannan EL, et al. Clinical Versus Administrative Data Bases for CABG Surgery: Does It Matter? Medical Care. 1992.

[35] Weiner JP, et al. Agreement Between Physicians' Office Records and Medicare Part B Claims Data. Health Care Financing Review. 1995.

[36] Second Report of the California Hospital Outcomes Project (1996): Acute Myocardial Infarction. Volume Two: Technical Appendix. 1996.

[37] Koska M. Are severity data an effective consumer tool? Hospitals. 1989.

[38] Muhlbaier L, et al. Using Medicare Claims for Outcomes Research. Medical Care. 1994.

[39] Iezzoni LI, et al. Using severity-adjusted stroke mortality rates to judge hospitals. International Journal for Quality in Health Care. 1995.

[40] Ku L, et al. Deciphering Medicaid data: Issues and needs. Health Care Financing Review. 1990.

[41] McMahon L, et al. Can Medicare prospective payment survive the ICD-9-CM disease classification system? Annals of Internal Medicine. 1986.

[42] Donabedian A. The definition of quality and approaches to its assessment. 1980.