Comparing User Performance on Parallel-Tone, Parallel-Speech, Serial-Tone and Serial-Speech Auditory Graphs

Visualization techniques such as bar graphs and pie charts let sighted users quickly understand and explore numerical data, but they remain largely inaccessible to visually impaired users. Even when they are made accessible, they are slow and cumbersome to use, and not as useful as they are to sighted users. Previous research has studied two methods of improving the perception and navigation speed of auditory graphs: using non-speech audio (such as tones) instead of speech to communicate data, and presenting two audio streams in parallel instead of in series. However, these studies date from the early 2000s; speech synthesis has improved considerably since then, as has the familiarity of visually impaired users with smartphones and speech systems. We systematically compare user performance on four modes of generating auditory graphs: parallel-tone, parallel-speech, serial-tone, and serial-speech. We conducted two within-subjects studies, one with 20 sighted users and the other with 20 visually impaired users. Each group performed point estimation and point comparison tasks with each technique on two sizes of bar graphs. We assessed task time, errors, and user preference. We found that tone was faster than speech, but speech was more accurate than tone; the parallel modality was faster than the serial modality; and visually impaired users were faster than their sighted counterparts. Further, users showed a strong personal preference for the serial-speech technique. To the best of our knowledge, this is the first empirical study to systematically compare these four techniques.
