Evaluating visual analytics for health informatics applications: a systematic review from the American Medical Informatics Association Visual Analytics Working Group Task Force on Evaluation

OBJECTIVE This article reports results from a systematic literature review of evaluations of data visualization and visual analytics technologies within the health informatics domain. The review aims to (1) characterize the variety of evaluation methods used within the health informatics community and (2) identify best practices.

METHODS A systematic literature review was conducted following PRISMA guidelines. PubMed searches were conducted in February 2017 using search terms representing the key concepts of interest: health care settings, visualization, and evaluation. References of included studies were also screened for eligibility. Data were extracted from included studies and analyzed using a PICOS framework: Participants, Interventions, Comparators, Outcomes, and Study Design.

RESULTS After screening, 76 publications met the review criteria. Publications varied across all five PICOS dimensions. The most common audience was healthcare providers (n = 43), and the most common data-gathering methods were direct observation (n = 30) and surveys (n = 27). About half of the publications (n = 36) focused on static, concentrated views of data with visuals. Evaluations were heterogeneous in both setting and the measurements used.

DISCUSSION A variety of approaches have been used to evaluate data visualization and visual analytics technologies. Usability measures were used most often in evaluations of early (prototype) implementations, whereas clinical outcomes were most common in evaluations of operationally deployed systems. These findings suggest opportunities both to expand current evaluation practices and to innovate with respect to evaluation methods for data visualization and visual analytics technologies across health settings.

CONCLUSION Evaluation approaches are varied. New studies should adopt commonly reported metrics, context-appropriate study designs, and phased evaluation strategies.
