Contrasting usability evaluation methods with blind users

Usability tests are an integral part of user-centered design, and usability testing with disabled people is necessary whenever they are among the potential users. Usability methods have been studied extensively with sighted people, but research with blind users remains insufficient; evaluating with blind users requires, for example, knowledge of assistive technologies and the ability to identify usability problems from the non-visual output of assistive devices. The authors therefore aim to extend theory and practice by investigating four usability methods with blind, visually impaired, and sighted people: the local test, the synchronous remote test, tactile paper prototyping, and computer-based prototyping. Local tests were compared with synchronous remote tests, and tactile paper prototyping with computer-based prototyping, with respect to the effectiveness of the evaluation and the experience of participants and the facilitator. The comparison of local and synchronous remote tests showed that both approaches uncovered a comparable number of usability problems across categories. Task completion time differed significantly for blind participants, but not for visually impaired or sighted participants, and most blind and visually impaired participants preferred the local test. The comparison of tactile paper prototyping and computer-based prototyping revealed that tactile paper prototyping provides a better overview of an application, while interaction with computer-based prototypes is closer to reality. The paper also discusses problems in planning and conducting these methods, particularly as they arise with blind people, and, based on the authors' experiences, offers recommendations for addressing these problems from both technical and organizational perspectives.
