How to Conduct a Heuristic Evaluation

Participants in a course on usability inspection methods were surveyed 7-8 months after the course to find out which methods they were actually using, and why they used or did not use the methods they had been taught. The major factor in method usage was the quality of the usability information gained from the method, with a very strong correlation between the rated benefit of using a method and the number of times the method had been used. Even though the respondents came from companies with above-average usability budgets (7% of development budgets were devoted to usability), the cost of using the methods was also a very strong factor in determining use. Other observations were that technology transfer was most successful when methods were taught at the time people had a specific need for them in their projects, and that methods need active evangelists to succeed.

The Need for More Usable Usability

User interface professionals ought to take some of their own medicine. How often have we heard UI folks complain that "we get no respect" from development managers? At the same time, we have nothing but scorn for any programmer whose attitude is that if users have problems with his or her program, it must be the users' fault. If we consider usability engineering as a system (a design, a set of interfaces with which development managers have to interact), then it obviously becomes the usability professionals' responsibility to design that system to maximize its communication with its users. My claim is that any problems in getting usability results used more in development stem more from the poor usability of the usability methods and results themselves than from evil development managers who deliberately want to torment their users. To get usability methods used more in real development projects, we must make the methods easier to use and more attractive.
One way of doing so is to consider how current usability methods are being used and what causes some methods to be used while others remain "a good idea we might try on the next project." As an example of such studies, I will report on a study of what causes usability inspection methods to be used.

Usability Inspection Methods

Usability inspection (Nielsen and Mack, 1994) is the generic name for a set of methods based on having evaluators inspect or examine usability-related aspects of a user interface. The evaluators can be usability specialists, but they can also be software development consultants with special expertise (e.g., knowledge of a particular interface style for graphical user interfaces), end users with content or task knowledge, or other types of professionals. The different inspection methods have slightly different goals, but normally usability inspection is intended as a way of evaluating user interface designs to find usability problems. In usability inspection, the evaluation of the user interface is based on the considered judgment of the inspector(s). The individual inspection methods vary in how this judgment is derived and in the evaluative criteria on which inspectors are expected to base their judgments. In general, the defining characteristic of usability inspection is its reliance on judgment as a source of evaluative feedback on specific elements of a user interface. See the appendix for a short summary of the individual usability inspection methods discussed in this paper.

Usability inspection methods were first described in formal presentations at the CHI'90 conference in 1990, where papers were published on heuristic evaluation (Nielsen and Molich, 1990) and cognitive walkthroughs (Lewis et al., 1990). Now, only four to five years later, usability inspection methods have become some of the most widely used methods in the industry.
As an example, in his closing plenary address at the Usability Professionals' Association's annual meeting in 1994 (UPA'94), Ken Dye, usability manager at Microsoft, listed the four major recent changes in Microsoft's approach to usability as:

• Use of heuristic evaluation
• Use of "discount" user testing with small sample sizes
• Contextual inquiry
• Use of paper mock-ups as low-fidelity prototypes

Many other companies and usability consultants are also known to have embraced heuristic evaluation and other inspection methods in recent years. Here is an example of an email message I received from one consultant in August 1994:

"I am working [...] with an airline client. We have performed, so far, 2 iterations of usability [...], the first being a heuristic evaluation. It provided us with tremendous information, and we were able to convince the client of its utility [...]. We saved them a lot of money, and are now ready to do a full lab usability test in 2 weeks. Once we're through that, we may still do more heuristic evaluation for some of the finer points."

Work on the various usability inspection methods obviously started several years before the first formal conference presentations. Even so, the current use of heuristic evaluation and other usability inspection methods is a remarkable example of rapid technology transfer from research to practice over a period of very few years.

[1] Jakob Nielsen, et al. Finding usability problems through heuristic evaluation, 1992, CHI.

[2] Juan Barahona, et al. Marketplace of Ideas, 2003.

[3] Cathleen Wharton, et al. Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces, 1990, CHI '90.

[4] Randolph G. Bias, et al. The pluralistic usability walkthrough: coordinated empathies, 1994.

[5] Jakob Nielsen, et al. A mathematical model of the finding of usability problems, 1993, INTERCHI.

[6] Jakob Nielsen, et al. Heuristic Evaluation of Prototypes (individual), 2022.

[7] Michael J. Kahn, et al. Formal usability inspections, 1994.

[8] Albert N. Badre, et al. Discount Usability Engineering, 1997, HCI.

[9] Jakob Nielsen, et al. Estimating the relative usability of two interfaces: heuristic, formal, and empirical methods compared, 1993, INTERCHI.

[10] Jakob Nielsen, et al. Improving a human-computer dialogue, 1990, CACM.

[11] Cathleen Wharton, et al. The cognitive walkthrough method: a practitioner's guide, 1994.

[12] Jakob Nielsen, et al. Paper versus computer implementations as mockup scenarios for heuristic evaluation, 1990, INTERACT.

[13] Jakob Nielsen, et al. Heuristic evaluation of user interfaces, 1990, CHI '90.

[14] Michael E. Atwood, et al. What is gained and lost when using evaluation methods other than empirical testing, 1993.

[15] Robert E. Kraut, et al. Iterative design of video communication systems, 1992, CSCW '92.

[16] Jakob Nielsen, et al. Enhancing the explanatory power of usability heuristics, 1994, CHI '94.

[17] Brigham Bell. Using programming walkthroughs to design a visual language, 1992.

[18] Robin Jeffries, et al. User interface evaluation in the real world: a comparison of four techniques, 1991, CHI.

[19] Clare-Marie Karat, et al. Comparison of empirical testing and walkthrough methods in user interface evaluation, 1992, CHI.

[20] George Casaday, et al. Inspections and design reviews: framework, history and reflection, 1994.