Maximum Likelihood Integration of rapid flashes and beeps

Maximum likelihood models of multisensory integration are theoretically attractive because such optimal models state the goals and assumptions of sensory information processing explicitly. When subjects perceive stimuli categorically, rather than on a continuous scale, Maximum Likelihood Integration (MLI) can occur before or after categorization, i.e., early or late. We introduce early MLI and apply it to the audiovisual perception of rapid beeps and flashes. Compared with late MLI, early MLI fits the data better and is more parsimonious. We also show that early MLI better accounts for the effects of information reliability, modality appropriateness, and intermodal attention, all of which modulate multisensory perception.
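The core of maximum likelihood integration is reliability weighting: each modality's estimate is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone. The sketch below shows this standard inverse-variance combination for two Gaussian cues; it is an illustrative baseline (in the spirit of the classic cue-combination rule), not the paper's early-MLI model, and the variable names are placeholders.

```python
def ml_integrate(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual Gaussian estimates by
    maximum likelihood (inverse-variance weighting)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight of auditory cue
    w_v = 1 - w_a                                # weight of visual cue
    mu = w_a * mu_a + w_v * mu_v                 # combined estimate
    var = 1 / (1 / var_a + 1 / var_v)            # reduced combined variance
    return mu, var

# Example: the more reliable (lower-variance) visual estimate dominates.
mu, var = ml_integrate(mu_a=2.0, var_a=1.0, mu_v=1.0, var_v=0.25)
# mu = 0.2 * 2.0 + 0.8 * 1.0 = 1.2 ;  var = 1 / (1 + 4) = 0.2
```

Under early MLI this combination would apply to the continuous internal representations before categorization; under late MLI it would apply to the categorized (e.g., counted) outputs.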
