What studies of audio-visual integration do not teach us about audio-visual integration
Auditory perception depends on more than just the processing of acoustic stimuli. Visual stimuli can also have a profound influence on listening. Salient examples of such effects include spatial ventriloquism (in which the location percept of an auditory stimulus is "captured" by that of a simultaneous visual stimulus) and drastically improved understanding of speech in noise when the talker's face is visible to the listener. These phenomena are typically described as "audio-visual integration," and are often well modeled, as in the case of the ventriloquist effect, by ideal Bayesian causal inference. However, these studies rely heavily on single pairs of stimuli (i.e., one auditory and one visual stimulus) and on the way in which cross-modal discrepancies are resolved. This talk will first discuss two problems that result from this approach: first, it is ambiguous whether integration arises from weighting two independent sensory estimates or from a single, bound percept; and second, the design is less useful for studying integration when the stimuli are congruent. The talk will then describe recent work from our lab focused on new designs that use multiple stimuli, in an attempt to alleviate these issues and inform better models of integration.
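To make the modeling framework mentioned above concrete, the sketch below implements the standard Bayesian causal inference account of spatial ventriloquism (in the spirit of Körding et al., 2007): the observer infers whether one common cause or two independent causes produced the auditory and visual measurements, then mixes the fused and segregated location estimates by the causal posterior. The parameter values (noise and prior widths, prior probability of a common cause) are illustrative assumptions, not values from the talk.

```python
import math

def causal_inference_estimate(x_a, x_v, sigma_a=4.0, sigma_v=1.0,
                              sigma_p=15.0, p_common=0.5):
    """Model-averaged auditory location estimate (degrees) under
    Bayesian causal inference. Parameter values are illustrative."""
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # p(x_a, x_v | common cause): both measurements come from one
    # source drawn from a zero-mean Gaussian spatial prior.
    var_sum = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = math.exp(-0.5 * ((x_a - x_v)**2 * var_p
                               + x_a**2 * var_v
                               + x_v**2 * var_a) / var_sum) \
        / (2 * math.pi * math.sqrt(var_sum))

    # p(x_a, x_v | independent causes): each measurement comes from
    # its own source, each drawn from the same spatial prior.
    like_c2 = math.exp(-0.5 * (x_a**2 / (var_a + var_p)
                               + x_v**2 / (var_v + var_p))) \
        / (2 * math.pi * math.sqrt((var_a + var_p) * (var_v + var_p)))

    # Posterior probability that the stimuli share a common cause.
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Reliability-weighted estimates under each causal structure.
    s_fused = ((x_a / var_a + x_v / var_v)
               / (1 / var_a + 1 / var_v + 1 / var_p))
    s_segregated = (x_a / var_a) / (1 / var_a + 1 / var_p)

    # Model averaging: mix the two estimates by the causal posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_segregated
```

With a small audio-visual discrepancy the auditory estimate is pulled toward the (more reliable) visual stimulus, producing ventriloquism; with a large discrepancy the common-cause posterior collapses and the estimate reverts toward the auditory-only value, reproducing the breakdown of capture that makes this model a good fit to single-pair data.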