Exploration of Head-Related Transfer Function and Environmental Sounds as a Means to Improve Auditory Scanning for Children Requiring Augmentative and Alternative Communication

ABSTRACT Many individuals who require augmentative and alternative communication (AAC) cannot directly select items on computer-based displays. Individuals who also have visual impairments may need to rely on auditory scanning, in which array choices are announced sequentially. Auditory scanning is challenging, and little research has examined how to improve this access method. Two potential solutions were tested: using environmental sounds to represent items (e.g., the sound of a clock ticking for a clock) and providing spatial cues about the organization of items (e.g., altering temporal and spectral features of the auditory information so that sounds are heard as left, right, up, or down relative to one another). The individual and combined effects of these cues were tested with typically developing 3-year-old children. After a set of stimulus sounds was collected and validated, 24 children participated in a within-subjects design with four conditions (spoken word label only, associated environmental sound only, spoken word label with spatial information, associated environmental sound with spatial information). Dependent measures included reaction time (RT) and accuracy. Results indicated that environmental sounds without spatial cues yielded slower RTs than any other condition. In addition, environmental sounds, with or without spatial cues, led to lower accuracy than spoken word labels.
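The spatial cues in the study were rendered with head-related transfer functions (HRTFs); as a rough illustration of what "altering temporal and spectral features" can mean in practice, the sketch below simulates only a left/right cue using interaural time and level differences. The function name, head-radius value, and level-difference rule are assumptions made for illustration and do not describe the stimulus pipeline used in the study.

```python
# Illustrative sketch only: a crude binaural cue using interaural time and
# level differences (ITD/ILD), a simplified stand-in for full HRTF filtering.
import numpy as np

def spatialize_lr(mono, sample_rate, azimuth_deg):
    """Return a stereo signal with the mono input panned toward azimuth_deg.

    Positive azimuth = right, negative = left. Uses a spherical-head
    (Woodworth) approximation for ITD and a simple broadband ILD.
    """
    az = np.radians(azimuth_deg)
    head_radius = 0.0875           # assumed average head radius in meters
    speed_of_sound = 343.0         # m/s

    # Woodworth-style ITD approximation (seconds), converted to samples.
    itd = (head_radius / speed_of_sound) * (abs(az) + np.sin(abs(az)))
    delay_samples = int(round(itd * sample_rate))

    # Simple level difference: attenuate the far ear up to ~6 dB at 90 degrees.
    ild_db = 6.0 * np.sin(abs(az))
    far_gain = 10.0 ** (-ild_db / 20.0)

    near = mono
    far = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)] * far_gain

    # Sources on the right reach the right ear first and louder.
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=-1)

# Example: a 0.5 s tone presented from roughly 45 degrees to the right.
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = spatialize_lr(tone, fs, azimuth_deg=45)
```

A full HRTF implementation would instead convolve the sound with measured left- and right-ear impulse responses for the desired direction, which also supplies the spectral cues needed for up/down positioning.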
