Usage of Subjective Scales in Accessibility Research

Accessibility research studies often gather subjective responses to technology using Likert-type items, in which participants respond to a prompt statement by selecting a position on a labeled response scale. In an analysis of recent ASSETS papers, we found that participants in non-anonymous accessibility studies gave more positive average ratings than participants in typical usability studies, especially when responding to questions about a proposed innovation. We further explored this potential positive response bias in an experimental study of two telephone information systems, one more usable than the other. Participants with visual impairment were less sensitive to the usability problems than participants in a typical student sample, and their subjective ratings did not correlate as strongly with objective measures of performance. A deeper understanding of the mechanism behind this effect would help researchers design better accessibility studies and interpret subjective ratings more accurately.
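
As a concrete illustration of the rating-versus-performance comparison described above, the sketch below computes Spearman rank correlations (a common choice for ordinal Likert data) between satisfaction ratings and task completion times for two hypothetical participant groups. This is a minimal sketch, not the paper's actual analysis: the data values, group labels, and the use of SciPy's spearmanr are assumptions made for demonstration only.

    # Minimal illustration (not the paper's analysis): comparing how strongly
    # Likert-type ratings track an objective performance measure in two groups.
    # All numbers are invented; they are arranged so that group A's ratings
    # barely track task time while group B's track it closely.
    from scipy.stats import spearmanr

    # 7-point Likert satisfaction ratings and task completion times (seconds)
    ratings_a = [6, 7, 6, 7, 6, 7, 6, 7]
    times_a = [310, 480, 405, 290, 460, 330, 300, 420]

    ratings_b = [7, 2, 6, 3, 5, 1, 6, 4]
    times_b = [290, 510, 320, 470, 350, 540, 310, 410]

    # Spearman's rank correlation handles the ordinal nature of Likert responses.
    rho_a, p_a = spearmanr(ratings_a, times_a)
    rho_b, p_b = spearmanr(ratings_b, times_b)

    print(f"Group A: rho = {rho_a:.2f} (p = {p_a:.3f})")  # near zero: ratings do not track performance
    print(f"Group B: rho = {rho_b:.2f} (p = {p_b:.3f})")  # strongly negative: higher ratings with faster times

A weaker rating-performance correlation in one group, as in this toy example, is the kind of pattern the study reports for participants with visual impairment relative to a typical student sample.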
