Multimedia Appendix 1. Characteristics and Interpretations of the Codebook Used To

Background: Increasingly, Web-based health applications are developed for the prevention and management of chronic diseases. However, their reach and utilization are often disappointing. Post-implementation qualitative evaluations can inform the optimization process and ultimately enhance adoption. In current practice, such evaluations are mainly performed with end-user surveys; a review by experts in a focus group, however, may be easier to administer and might yield similar results.

Objective: The aim of this study was to assess whether industrial design engineers in a focus group would address the same issues as end users in a Web-based survey when evaluating a commercial Web-based health risk assessment (HRA) with tailored feedback.

Methods: Seven Dutch companies used the HRA as part of their corporate health management strategy. Employees using the HRA (N=2289) and 10 independent industrial designers were invited to participate in the study. The HRA consisted of four components: (1) an electronic health questionnaire, (2) biometric measurements, (3) laboratory evaluation, and (4) individually tailored feedback generated by decision support software. After participating in the HRA as end users, both groups evaluated the program: end users completed an evaluation questionnaire that included a free-text field, and designers took part in a focus group discussion. Constructs from user satisfaction and technology acceptance theories were used to categorize and compare the remarks from both evaluations.

Results: We qualitatively analyzed 294 remarks from 189 end users and 337 remarks from 6 industrial designers, pertaining to 295 issues in total. Of these, 137 issues were addressed in the end-user survey and 148 in the designer focus group. Only 7.3% (10/137) of the issues raised in the survey were also raised in the focus group. End users made more remarks about the usefulness of the HRA and about prior expectations that were not met. Designers made more remarks about how information was presented to end users, the quality of the feedback provided by the HRA, recommendations on marketing, how to create more unity in the design of the HRA, and how to improve the HRA based on these issues.

Conclusions: End-user surveys should not be substituted for expert focus groups. The issues identified by end users in the survey and by designers in the focus group differed considerably, and the focus group produced many new issues. The focus group often addressed different aspects of user satisfaction and technology acceptance than the survey did; when both addressed the same aspects, the issues differed considerably in content.