Evaluating the Downstream Utility of User Tests and Examining the Developer Effect: A Case Study

This article reports a case study evaluating the downstream utility of user tests performed on a digital library, with particular attention to the developer effect. Downstream utility is defined as the effectiveness with which the resolution to a usability problem (UP) is implemented; the developer effect is defined as developers' bias toward fixing UPs with particular characteristics. To measure the effectiveness of the user tests, the actual impact of fixing or not fixing the identified UPs was analyzed. To address a theoretical void in the study of the persuasive power of usability evaluation results, Information Integration Theory was employed. Six research questions were investigated, concerning how persuasive different qualities of usability problems are in inducing fixes and how effective those fixes turn out to be. Multiperspective data were collected from usability specialists, from the development team of the digital library, and from both its existing and new users. Implications for reporting UPs and for future research are drawn.
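As background on the theory referred to above, the following minimal sketch illustrates Anderson's weighted-averaging integration rule from Information Integration Theory, R = (w0*s0 + sum(wi*si)) / (w0 + sum(wi)), applied hypothetically to judging how persuasive a UP report might be. The attribute names, weights, and scale values are illustrative assumptions, not measures or parameters from the study.

# Sketch of Anderson's weighted-averaging rule (Information Integration Theory),
# used here only as a hypothetical model of how a developer might integrate
# several attributes of a usability problem (UP) into one persuasiveness judgment.

def averaged_judgment(attributes, initial_weight=1.0, initial_value=0.0):
    """Combine scale values s_i with weights w_i via the averaging rule:
    R = (w0*s0 + sum(w_i*s_i)) / (w0 + sum(w_i)),
    where (w0, s0) represents the judge's initial impression."""
    numerator = initial_weight * initial_value
    denominator = initial_weight
    for weight, value in attributes:
        numerator += weight * value
        denominator += weight
    return numerator / denominator

# Hypothetical UP attributes: (weight, scale value on a 0-10 persuasiveness scale)
up_report = [
    (2.0, 8.0),  # severity rating assigned by the usability specialists
    (1.0, 6.0),  # frequency of the problem across test participants
    (1.5, 7.0),  # clarity and completeness of the problem description
]

print(f"Predicted persuasiveness: {averaged_judgment(up_report):.2f}")

Under these illustrative inputs the rule yields a single judgment between 0 and 10; varying the weights shows how attributes such as severity could dominate the integrated impression, which is the kind of prediction the theory supports.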
