Do multimodal signals need to come from the same place? Crossmodal attentional links between proximal and distal surfaces

Previous research has shown that multimodal signals can elicit faster and more accurate responses than purely unimodal displays. In most cases, however, this response facilitation occurs only when the signals are presented in roughly the same spatial location. This would seem to impose a severe restriction on interface designers: to use multimodal displays effectively, all signals must be presented from the same location on the display. We previously reported evidence that haptic cues may offer a solution to this problem, as haptic cues presented to a user's back can redirect visual attention to locations on a screen in front of the user (Tan et al., 2001). In the present experiment we used a visual change detection task to investigate whether (i) this type of visual-haptic interaction is robust at low cue validity rates and (ii) similar effects occur for auditory cues. Valid haptic cues produced significantly faster change detection times even when they accurately indicated the location of the change on only 20% of the trials. Auditory cues had a much smaller effect on detection times at the high validity rate (80%) than haptic cues and did not significantly improve performance at the 20% validity rate. These results suggest that the use of haptic attentional cues may be particularly effective in environments in which information cannot be presented in the same spatial location.

[1] C. Spence et al. Cross-modal links in spatial attention. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 1998.

[2] J. Driver et al. Audiovisual links in endogenous covert spatial attention. Journal of Experimental Psychology: Human Perception and Performance, 1996.

[3] Ryan M. Traylor et al. A haptic back display for attentional and directional cueing, 2003.

[4] Michael S. Wogalter et al. Behavioral compliance with warnings: effects of voice, context, and location, 1993.

[5] D. Allport et al. On the division of attention: a disproof of the single channel hypothesis. The Quarterly Journal of Experimental Psychology, 1972.

[6] Ronald A. Rensink et al. To see or not to see: the need for attention to perceive changes in scenes, 1997.

[7] C. Spence et al. Cross-modal links in exogenous covert spatial orienting between touch, audition, and vision. Perception & Psychophysics, 1998.

[8] Piti Irawan et al. Haptic cueing of a visual change-detection task: implications for multimodal interfaces, 2001.

[9] C. Spence et al. The cost of expecting events in the wrong sensory modality. Perception & Psychophysics, 2001.

[10] Jason B. Mattingley et al. Preserved cross-modal attentional links in the absence of conscious vision: evidence from patients with primary visual cortex lesions, 2000.

[11] B. Stein et al. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. The Journal of Neuroscience, 1987.

[12] Brian J. Scholl. Attenuated change blindness for exogenously attended items in a flicker paradigm, 2000.