Generative Probabilistic Modeling: Understanding Causal Sensorimotor Integration

In this chapter, we argue that many aspects of human perception are best explained by a modeling approach in which experimental subjects are assumed to possess a full generative probabilistic model of the task they face, and to use this model to make inferences about their environment and act optimally given the available information. We apply this generative modeling framework in two diverse settings, concurrent sensory and motor adaptation and multisensory oddity detection, and show that in both cases the data are best described by a full generative modeling approach.

Bayesian ideal-observer modeling is an elegant and successful normative approach to understanding human perception. One domain in which it has recently seen much success is multisensory integration (see Chapter 1). Existing applications of this modeling approach have frequently focused on a simple special case in which the ideal observer’s estimate of an unknown quantity in the environment is a reliability-weighted mean of the individually observed cues. This special case suffices to explain a wide variety of interesting perceptual phenomena. We argue, however, that the Bayesian-observer approach can be applied more powerfully and generally through explicit generative modeling of the perceptual task in each experiment. In other words, we assume that people have access to a full generative model of their observations and that they use this model to make optimal decisions in performing the task.
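The reliability-weighted-mean special case mentioned above can be made concrete with a short sketch. This is a minimal illustration assuming independent Gaussian cue noise and a flat prior; the function name and the example cue values are purely illustrative:

```python
import numpy as np

def reliability_weighted_fusion(cues, sigmas):
    """Fuse independent Gaussian cue estimates by precision weighting.

    With a flat prior, the ideal observer's posterior mean is the
    reliability-weighted mean of the cues, and the posterior variance
    is the inverse of the summed precisions (reliabilities).
    """
    cues = np.asarray(cues, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    fused_mean = np.sum(precisions * cues) / np.sum(precisions)
    fused_var = 1.0 / np.sum(precisions)
    return fused_mean, fused_var

# Illustrative cues: a precise cue at 2.0 (sigma = 1) and a noisier
# cue at 4.0 (sigma = 2); the fused estimate is pulled toward the
# more reliable cue.
mean, var = reliability_weighted_fusion([2.0, 4.0], [1.0, 2.0])
# mean = 2.4, var = 0.8
```

Note that the fused variance is always smaller than either single-cue variance, which is the signature prediction of optimal linear cue combination.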
This systematic approach effectively provides a “model for modeling” with two key advantages. (1) It gives the modeler a clear framework for modeling new tasks, rather than simply applying common normative models such as linear combination, which may not apply to a new scenario and may fail to explain important aspects of human behavior. (2) Human performance can be measured against these explicit “optimal” models, so that we can draw conclusions about the optimality of human perception or reveal architectural limitations of the human perceptual system that cause it to deviate from optimality. For a particular perceptual task, the optimal solution requires inference in the true generative model of the task. Here, optimal is defined in the sense that the posterior probability over the relevant unknowns in the environment is calculated. Any actions or decisions to be made can then be taken on the basis of this posterior.
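To illustrate what inference in a full generative model adds beyond linear combination, consider a toy causal-inference model in which the observer computes the posterior probability that two cues share a common cause. This is a hedged sketch, not any specific published model: it assumes Gaussian sensory noise, a zero-mean Gaussian prior over source locations, and illustrative parameter values, and all names are invented for this example:

```python
import numpy as np

def posterior_common_cause(xv, xa, sigma_v, sigma_a, sigma_p, p_common=0.5):
    """Posterior probability that two cues (e.g. visual and auditory)
    were generated by a single shared cause.

    Generative model: with one cause, a latent source s ~ N(0, sigma_p^2)
    generates both noisy cues; with two causes, each cue has its own
    independent source. Bayes' rule combines the marginal likelihoods.
    """
    vv = sigma_v**2 + sigma_p**2      # marginal variance of the visual cue
    va = sigma_a**2 + sigma_p**2      # marginal variance of the auditory cue
    cov = sigma_p**2                  # covariance induced by a shared source
    # Likelihood under a common cause: correlated bivariate Gaussian.
    det1 = vv * va - cov**2
    quad1 = (va * xv**2 - 2 * cov * xv * xa + vv * xa**2) / det1
    like1 = np.exp(-0.5 * quad1) / (2 * np.pi * np.sqrt(det1))
    # Likelihood under independent causes: product of two marginals.
    like2 = (np.exp(-0.5 * xv**2 / vv) / np.sqrt(2 * np.pi * vv)
             * np.exp(-0.5 * xa**2 / va) / np.sqrt(2 * np.pi * va))
    return p_common * like1 / (p_common * like1 + (1 - p_common) * like2)

# Nearby cues support a common cause; widely separated cues do not.
p_near = posterior_common_cause(1.0, 1.2, 1.0, 1.0, 10.0)
p_far = posterior_common_cause(-5.0, 5.0, 1.0, 1.0, 10.0)
```

The point of the sketch is that the observer's percept here depends on an inferred causal structure, a quantity that a fixed reliability-weighted average cannot express.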
