How can Health Care Organizations be Reliably Compared?: Lessons From a National Survey of Patient Experience

Background: Patient experience is increasingly used to assess organizational performance, for example in public reporting or pay-for-performance schemes. Approaches based on 95% confidence intervals are commonly used to determine required survey sample sizes and to report performance, but they may result in unreliable organizational comparisons.

Methods: We analyzed data from 2.2 million patients who responded to the English 2009 General Practice Patient Survey, which included 45 patient experience questions nested within 6 care domains (access, continuity of care, communication, anticipatory care planning, out-of-hours care, and overall care satisfaction). For each question, we calculated unadjusted and case-mix adjusted (for age, sex, and ethnicity) organization-level reliability and intraclass correlation coefficients.

Results: Mean responses per organization ranged from 23 to 256 for questions evaluating primary care practices, and from 1454 to 2758 for questions evaluating out-of-hours care organizations. Adjusted and unadjusted reliability values were similar. Twenty-six questions had excellent reliability (≥0.90). Seven nurse communication questions had very good reliability (≥0.85), but 3 anticipatory care planning questions had lower reliability (<0.70). Reliability was typically <0.70 for questions with <100 mean responses per practice, usually questions that only a subset of patients were eligible to answer. Nine questions had both excellent reliability and high intraclass correlation coefficients (≥0.10), indicating both reliable measurement and substantial performance variability.

Conclusions: High reliability is a necessary property of indicators used to compare health care organizations. Using the English General Practice Patient Survey as a case study, we show how reliability and intraclass correlation coefficients can be used to select measures that support robust organizational comparisons, and to design surveys that both provide high-quality measurement and optimize survey costs.
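The link between the intraclass correlation coefficient and organization-level reliability described in the abstract can be sketched with the standard Spearman-Brown relation, reliability = n·ICC / (1 + (n − 1)·ICC), where n is the number of responses per organization. The sketch below is illustrative only; the function names, the one-way ANOVA estimator, and the toy data layout are assumptions, not the paper's actual analysis code.

```python
import statistics

def one_way_icc(groups):
    """Estimate the ICC from a one-way random-effects layout.

    groups: list of lists of patient scores, one inner list per
    organization. Returns (icc, mean responses per organization).
    Uses a balanced-design approximation for the mean squares.
    """
    k = len(groups)
    n = statistics.mean(len(g) for g in groups)
    grand = statistics.mean(x for g in groups for x in g)
    # Between-organization mean square
    msb = sum(len(g) * (statistics.mean(g) - grand) ** 2
              for g in groups) / (k - 1)
    # Within-organization mean square
    msw = sum((x - statistics.mean(g)) ** 2
              for g in groups for x in g) / sum(len(g) - 1 for g in groups)
    icc = (msb - msw) / (msb + (n - 1) * msw)
    return icc, n

def org_level_reliability(icc, n):
    """Spearman-Brown: reliability of an organization's mean score
    computed from n responses."""
    return n * icc / (1 + (n - 1) * icc)

# With an ICC of 0.05, 23 responses per practice (the survey's lower
# bound) give reliability ~0.55, while 256 responses give ~0.93,
# illustrating why small per-question samples undermine comparisons.
low = org_level_reliability(0.05, 23)
high = org_level_reliability(0.05, 256)
```

Solving the same formula for n also shows how such surveys can be sized: the responses needed to reach a target reliability R at a given ICC is n = R(1 − ICC) / (ICC(1 − R)).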
