Issues in the Design of Discrete Choice Experiments

The use of preference-elicitation tasks—in particular, discrete choice experiments (DCEs)—in health economics has grown significantly in recent decades [1]. The most widely used DCE approach asks respondents to consider a series of hypothetical choices between alternatives (here called choice tasks), and to specify which alternative they prefer. The use of choice tasks in other areas—especially psychology, transportation, marketing and agriculture—has a more established history. Health preference studies have been conducted for about as long [2, 3], but not to the same extent. The relatively late uptake of preference evidence in health is surprising in some regards, as patient and population values concerning health have always been key components of a range of questions from health policy to clinical practice, and often cannot be directly observed, a problem exacerbated by the lack of a perfectly competitive market [4]. Though there is broad consensus that the values patients or the population place on health matter in decision-making, the methods for incorporating those values reliably are debated. A significant issue in the conduct of such experiments is how best to construct the choice tasks to produce policy-relevant and reliable value estimates. If, for simplicity, the task has two alternatives (i.e., paired comparison [5]), which alternatives should be presented head-to-head? The risk of picking the wrong combinations is that values for some alternatives either cannot be estimated at all, or are estimated with an unacceptably low level of precision. This topic is of course not unique to health, and we should be cognisant of the work being conducted in other fields using similar methods. Conversely, we also believe that the design of health preference studies requires specific consideration to reflect the nature of the questions asked, and to provide results that best inform decision makers.
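To make the design problem concrete, the following is a minimal sketch (not from the paper) of how a candidate set of two-alternative choice tasks can be evaluated: enumerate the full factorial of attribute profiles, form head-to-head pairs, and compute the design's D-error under a multinomial logit model with assumed prior coefficients. The three attributes and the prior weights are illustrative assumptions, not values from any study discussed here.

```python
# Hypothetical sketch: evaluating a paired-comparison DCE design by its
# D-error under the multinomial logit (MNL) model. Attributes and priors
# are assumed for illustration only.
import itertools
import numpy as np

# Three binary attributes coded 0/1 (illustrative: cost, waiting time, efficacy).
levels = [(0, 1), (0, 1), (0, 1)]
profiles = [np.array(p, dtype=float) for p in itertools.product(*levels)]

# Candidate choice tasks: all distinct head-to-head pairs of profiles.
pairs = list(itertools.combinations(profiles, 2))

beta = np.array([-0.5, -0.3, 0.8])  # assumed prior preference weights

def d_error(tasks, beta):
    """D-error of a paired-comparison design under the MNL model.

    Each two-alternative task contributes p1*(1-p1) * (x1-x2)(x1-x2)'
    to the Fisher information; D-error = det(I)^(-1/k). Lower is better.
    """
    k = len(beta)
    info = np.zeros((k, k))
    for x1, x2 in tasks:
        d = x1 - x2
        p1 = 1.0 / (1.0 + np.exp(-beta @ d))  # P(choose alternative 1)
        info += p1 * (1.0 - p1) * np.outer(d, d)
    det = np.linalg.det(info)
    return det ** (-1.0 / k) if det > 0 else np.inf

# With all 28 pairs, every coefficient is identified (finite D-error);
# with too few or badly chosen pairs the information matrix is singular,
# and some values cannot be estimated at all.
print(d_error(pairs, beta))      # finite: all attributes identified
print(d_error(pairs[:2], beta))  # inf: only two pairs, rank-deficient design
```

This illustrates the point in the text: a poor selection of head-to-head combinations does not merely reduce precision, it can make some values inestimable, which the D-error makes visible as a singular information matrix.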
This paper provides a summary of a panel discussion from a DCE design symposium at the International Academy of Health Preference Research (IAHPR) 2018 meeting in Hobart, Australia, on September 28, 2018. This paper is one of a series of manuscripts reflecting on key issues discussed at IAHPR meetings [6, 7]. At the start of the symposium, each panellist presented lessons learned from their own experience:

- John Rose (JR): A unified theory of experimental design for stated choice studies
- Deborah J. Street (DJS): What can simulations tell us about DCE design performance?
- Marcel F. Jonker (MFJ): Individually adaptive D-efficient DCE designs
- Paul Hansen (PH): The PAPRIKA method: A full factorial DCE involving pairwise rankings of all possible attribute combinations
- Benjamin M. Craig (BMC): Experience-based methods for DCE designs

This article is part of the topical collection on "From the International Academy of Health Preference Research".

[1] Edward L. Thorndike, et al. Valuations of Certain Pains, Deprivations, and Frustrations. 1937.

[2] Deborah J. Street, et al. The Construction of Optimal Stated Choice Experiments. 2007.

[3] L. Thurstone. The method of paired comparisons for social values. 1927.

[4] Maarten J. IJzerman, et al. Multiple Criteria Decision Analysis for Health Care Decision Making--An Introduction: Report 1 of the ISPOR MCDA Emerging Good Practices Task Force. Value in Health. 2016.

[5] Arne Risa Hole, et al. Response Patterns in Health State Valuation Using Endogenous Attribute Attendance and Latent Class Analysis. Health Economics. 2016.

[6] K. Arrow. Uncertainty and the welfare economics of medical care. 1963. Bulletin of the World Health Organization. 2004.

[7] Catharina G. M. Groothuis-Oudshoorn, et al. Key Issues and Potential Solutions for Understanding Healthcare Preference Heterogeneity Free from Patient-Level Scale Confounds. The Patient - Patient-Centered Outcomes Research. 2018.

[8] John F. P. Bridges, et al. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability? Expert Review of Pharmacoeconomics & Outcomes Research. 2017.

[9] Rebecca Noel, et al. Symposium Title: Preference Evidence for Regulatory Decisions. The Patient - Patient-Centered Outcomes Research. 2018.

[10] Jordan J. Louviere, et al. Designing Discrete Choice Experiments: Do Optimal Designs Come at a Price? 2008.

[11] Joanna Coast, et al. Using qualitative methods for attribute development for discrete choice experiments: issues and recommendations. Health Economics. 2012.

[12] Deborah Marshall, et al. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force. Value in Health. 2013.

[13] Bas Donkers, et al. Effect of Level Overlap and Color Coding on Attribute Non-Attendance in Discrete Choice Experiments. Value in Health. 2017.

[14] Deborah J. Street, et al. One Method, Many Methodological Choices: A Structured Review of Discrete-Choice Experiments for Health State Valuation. PharmacoEconomics. 2018.

[15] R. Smith, et al. Preference for subcutaneous or intravenous administration of rituximab among patients with untreated CD20+ diffuse large B-cell lymphoma or follicular lymphoma: results from a prospective, randomized, open-label, crossover study (PrefMab). Annals of Oncology. 2016.

[16] Rainer Schwabe, et al. Design for Discrete Choice Experiments. 2015.

[17] H. A. David. The Method of Paired Comparisons (2nd ed.). 1989.

[18] Julie Ratcliffe, et al. Measuring and valuing health benefits for economic evaluation in adolescence: an assessment of the practicality and validity of the Child Health Utility 9D in the Australian adolescent population. Value in Health. 2012.

[19] P. Moran. On the method of paired comparisons. Biometrika. 1947.

[20] Anthony C. Atkinson, et al. Optimum Experimental Designs. 1992.

[21] P. Green, et al. Thirty Years of Conjoint Analysis: Reflections and Prospects. 2001.

[22] Denzil G. Fiebig, et al. The Generalized Multinomial Logit Model: Accounting for Scale and Coefficient Heterogeneity. Marketing Science. 2010.

[23] M. T. King, et al. Using a discrete choice experiment to value the QLU-C10D: feasibility and sensitivity to presentation format. Quality of Life Research. 2016.

[24] R. D. Cook, et al. A Comparison of Algorithms for Constructing Exact D-Optimal Designs. 1980.

[25] John M. Rose, et al. Design Efficiency for Non-Market Valuation with Choice Modelling: How to Measure it, What to Report and Why. 2008.

[26] R. K. Meyer, et al. The Coordinate-Exchange Algorithm for Constructing Exact Optimal Experimental Designs. 1995.

[27] P. Hansen, et al. A new method for scoring additive multi-attribute value models using pairwise rankings of alternatives. 2008.

[28] Bas Donkers, et al. Sample Size Requirements for Discrete-Choice Experiments in Healthcare: A Practical Guide. The Patient - Patient-Centered Outcomes Research. 2015.

[29] D. Street, et al. The Construction of Optimal Stated Choice Experiments: Theory and Methods, by D. J. Street and L. Burgess. 2007.

[30] Mandy Ryan, et al. Discrete choice experiments in health economics: a review of the literature. Health Economics. 2012.

[31] Arne Risa Hole, et al. Accounting for Attribute-Level Non-Attendance in a Health Choice Experiment: Does it Matter? Health Economics. 2015.