What Silly Postures Tell Us about the Brain

Because it looks funny and is of little practical use, we rarely touch our shoulder with our ear. It turns out that this observation has led to an exciting discovery in the field of cue combination.

To interpret our senses, we rely on other senses. For example, to know how the image on our retina relates to the outside world, we need to know the alignment of our eyeball with respect to the outside world; to know this alignment, in turn, we need to know the orientation of our head relative to the outside world. Similar problems occur in other sensory and motor contexts: the perceived direction of a sound depends on the orientation of our head, and movements of our body rely on knowing the alignment of our body relative to the outside world. One way of formalizing these problems uses the idea of a coordinate system: every sensor provides information in a particular coordinate system (e.g., retinal coordinates for visual input or joint-angle coordinates for proprioceptive information). The fact that sensory information arrives in many different coordinate systems poses a challenge for the brain: how to combine information encoded in different coordinate systems to produce successful movements. Both behavioral (Soechting and Flanders, 1989; van Beers et al., 1999; Sober and Sabes, 2003) and neural recording (Wallace et al., 1998; Fetsch et al., 2007) studies have explored how the brain "transforms" information across coordinate systems. It is generally understood that information degrades as it is transformed from one coordinate system to another.

Another important fact about information integration is that the brain must make all of its decisions based on unreliable and sometimes conflicting sensory data. Crucially, sensory uncertainty can be reduced by combining information from multiple sensors. Any such combination needs to assign weights to the different inputs: should each sensor count equally, or are some sensors more trustworthy than others? Based on studies showing that the brain often weights sensory inputs in a near-optimal fashion (Yuille and Kersten, 2006; Trommershauser et al., 2010), it is generally understood that the brain is exquisitely sensitive to the statistics of the signals it combines.

That coordinate transformations are costly and sensory information is uncertain sets the stage for a remarkably interesting problem. Tasks require precision in the task-relevant coordinate system, yet imprecise sensory information arrives from multiple sources. How should the brain combine these sources to move successfully? One intuition is that information requiring less coordinate transformation should be trusted more than information requiring more transformation; that is, it should receive more weight. The current study by Blohm and Burns (2010) tests a crucial prediction of this intuition: if one sensor's signal is made more difficult to transform into the required coordinate system, it should be assigned a lower weight. The authors begin with an established cue-conflict paradigm that allows the experimenter to measure the weight subjects put on vision versus proprioception (Sober and Sabes, 2003, 2005). In a variant of the experiment, subjects then performed the task while holding their heads at a 30° angle. The rationale was that in this very unusual posture, we should have more uncertainty about the alignment of the head-centered and body-centered coordinate systems; a toy sketch of this logic follows below.
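To make the rationale concrete, here is a minimal simulation sketch. All noise levels and geometry are invented for illustration and none of the numbers come from the study: a proprioceptive estimate of hand position is assumed to arrive directly in body-centered coordinates, while a visual estimate must be rotated from head-centered into body-centered coordinates through the head-roll angle, which is itself known only imprecisely. Propagating that angular uncertainty inflates the variance of the transformed visual cue, and the standard inverse-variance weighting rule then assigns vision a smaller weight:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True hand position in body-centered coordinates (arbitrary toy value).
hand_body = np.array([10.0, 0.0])  # cm

def visual_weight(head_roll_sd_deg):
    """Variance of a visual estimate after rotating it from head- to
    body-centered coordinates through an uncertain head-roll angle,
    and the resulting inverse-variance (reliability) weight of vision."""
    # Proprioception reports directly in body coordinates (sd = 1 cm).
    prop = hand_body + rng.normal(0.0, 1.0, size=(n, 2))
    # Vision reports in head coordinates with smaller sensor noise (sd = 0.5 cm).
    vis_head = hand_body + rng.normal(0.0, 0.5, size=(n, 2))
    # The head-roll estimate used for the transformation is itself noisy.
    theta = np.deg2rad(rng.normal(0.0, head_roll_sd_deg, size=n))
    c, s = np.cos(theta), np.sin(theta)
    vis_body = np.stack([c * vis_head[:, 0] - s * vis_head[:, 1],
                         s * vis_head[:, 0] + c * vis_head[:, 1]], axis=1)
    var_vis = vis_body.var(axis=0).mean()
    var_prop = prop.var(axis=0).mean()
    # Reliability-weighted cue combination: weight proportional to 1/variance.
    return (1 / var_vis) / (1 / var_vis + 1 / var_prop)

for sd in [1, 5, 15]:  # upright head: roll known precisely; tilted: uncertain
    print(f"head-roll sd {sd:2d} deg -> visual weight {visual_weight(sd):.2f}")
```

With these made-up numbers, the visual weight drops from roughly 0.8 when head roll is precisely known to roughly 0.1 when it is very uncertain; the qualitative direction of the effect, not the specific values, is the point.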
If tilting the head indeed increases the difficulty of the coordinate transformation, subjects should rely less on signals that must be transformed across coordinate systems. The authors beautifully show that changing the difficulty of the transformation affects the weights. For example, proprioceptive information about the hand relative to the body should be independent of head tilt, whereas visual information about the same variable should be affected by the tilt, as it must be converted from retinal to joint coordinates. Indeed, the authors find that head tilt leads to smaller visual weights (see their Figure 8A, light blue line).

At some level, however, the paper raises as many questions as it answers, since there seem to be two independent (but not mutually exclusive) interpretations of the results. One interpretation, offered by the authors, is that our sense of head rotation is precise when the head is held normally and less precise when the head is rotated. This should be true because of the Weber-law properties of many sensors, which can result from the signal-dependent noise that has been characterized in the relevant orientation sensors. In this case, it is uncertainty about the orientation of the head relative to the body that gives rise to uncertainty about the alignment of the coordinate systems (Kording and Tenenbaum, 2006). On this interpretation, transformed signals are given less weight during head rotation simply because uncertainty about head position makes the visually derived position estimate less reliable. Another interpretation is that the precision of computations in the nervous system should be better for coordinate transformations that happen more often. As the head is usually roughly at a 90° angle relative to the shoulders, the nervous system has more experience with, and might devote more neural resources to, the associated coordinate transformation, independent of any signal-dependent noise arising from peripheral sensors. On this interpretation, transformed signals are given less weight because the brain avoids the neural noise associated with performing a less familiar version of the transformation.

The paper thus opens up a new set of questions that should also be asked about previous studies (Sober and Sabes, 2003, 2005). How much of cross-modal weighting is determined by neural noise in the coordinate transformations themselves, and how much by uncertainty about the alignment of the coordinate systems? Future variants of the experiment could provide additional cues about the relative alignment of the head, or remove such cues. Such a manipulation should have no effect if neural noise dominates and a strong effect if alignment uncertainty dominates.
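One way to see both why the two accounts are hard to distinguish in the original design and why the proposed manipulation could separate them is a toy extension of the earlier sketch (again, every number is invented for illustration): write the variance of the transformed visual cue as sensor noise plus an alignment-driven term plus a fixed neural transformation-noise term. An external cue to head orientation shrinks only the alignment term, so the visual weight should recover under the alignment-uncertainty account but stay depressed under the neural-noise account:

```python
def visual_weight(align_var, neural_var, sensor_var=0.25, prop_var=1.0):
    """Inverse-variance weight of vision when its total variance is sensor
    noise plus alignment-driven variance plus transformation (neural) noise.
    All variances are toy numbers in cm^2."""
    var_vis = sensor_var + align_var + neural_var
    return (1 / var_vis) / (1 / var_vis + 1 / prop_var)

# Two accounts of the same head-tilt effect (identical total variance):
# (a) tilt mostly inflates alignment uncertainty;
# (b) tilt mostly inflates neural transformation noise.
accounts = {
    "alignment-uncertainty account": dict(align_var=6.0, neural_var=0.1),
    "neural-noise account":          dict(align_var=0.1, neural_var=6.0),
}

for name, params in accounts.items():
    w_tilt = visual_weight(**params)
    # A strong external head-orientation cue shrinks only the
    # alignment-driven term, leaving the neural noise untouched.
    cued = dict(params, align_var=params["align_var"] * 0.1)
    w_cued = visual_weight(**cued)
    print(f"{name}: tilted {w_tilt:.2f} -> with head-orientation cue {w_cued:.2f}")
```

In this toy model both accounts produce the same weight drop under head tilt; only the head-orientation-cue manipulation pulls them apart, which is exactly the logic behind the proposed follow-up experiment.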

References

[1] Satoshi Hirose, et al. (2007). Activity in posterior parietal cortex mediates the visual dominance over kinesthesia. Neuroscience Research.

[2] Steve W. C. Chang, et al. (2009). Using a Compound Gain Field to Compute a Reach Plan. Neuron.

[3] D. Knill, et al. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences.

[4] S. Lechner-Steinleitner, et al. (1978). Interaction of labyrinthine and somatoreceptor inputs as determinants of the subjective vertical. Psychological Research.

[5] F. Lacquaniti, et al. (1998). Visuo-motor transformations for arm reaching. The European Journal of Neuroscience.

[6] D. McCloskey, et al. (1976). Joint sense, muscle sense, and their combination as position sense, measured at the distal interphalangeal joint of the middle finger. The Journal of Physiology.

[7] D. Burr, et al. (2009). Auditory dominance over vision in the perception of interval duration. Experimental Brain Research.

[8] D. Angelaki, et al. (2010). Multisensory integration: resolving sensory ambiguities to build novel representations. Current Opinion in Neurobiology.

[9] Konrad Paul Kording, et al. (2011). Sensory Cue Integration.

[10] Joshua B. Tenenbaum, et al. (2006). Causal inference in sensorimotor integration. NIPS.

[11] F. Lacquaniti, et al. (1997). Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. Journal of Neurophysiology.

[12] J. F. Soechting, et al. (2002). Oculocentric frames of reference for limb movement. Archives Italiennes de Biologie.

[13] W. A. Fletcher, et al. (2004). Eye position signals in human saccadic processing. Experimental Brain Research.

[14] Gunnar Blohm, et al. (2010). Multi-Sensory Weights Depend on Contextual Noise in Reference Frame Transformations. Frontiers in Human Neuroscience.

[15] Gunnar Blohm, et al. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision.

[16] M. Wallace, et al. (1998). Multisensory integration in the superior colliculus of the alert cat. Journal of Neurophysiology.

[17] A. Yuille, et al. (2004). Object perception as Bayesian inference. Annual Review of Psychology.

[18] J. D. Crawford, et al. (2006). Proprioceptive guidance of saccades in eye-hand coordination. Journal of Neurophysiology.

[19] Denise Taylor, et al. (2008). Head and Neck Position Sense. Sports Medicine.

[20] C. Prablanc, et al. (1995). Vectorial coding of movement: vision, proprioception, or both? Journal of Neurophysiology.

[21] I. Curthoys, et al. (1997). The Effect of Ocular Torsional Position on Perception of the Roll-tilt of Visual Stimuli. Vision Research.

[22] D. Burke, et al. (1976). The responses of human muscle spindle endings to vibration during isometric contraction. The Journal of Physiology.

[23] Gunnar Blohm, et al. (2009). Decoding the cortical transformations for visually guided reaching in 3D space. Cerebral Cortex.

[24] G. E. Loeb, et al. (1994). The computation of position sense from spindles in mono- and multiarticular muscles. The Journal of Neuroscience.

[25] R. Andersen, et al. (2006). The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia.

[26] D. Wolpert, et al. (2002). When Feeling Is More Important Than Seeing in Sensorimotor Adaptation. Current Biology.

[27] Philip N. Sabes, et al. (2003). Multisensory Integration during Motor Planning. The Journal of Neuroscience.

[28] Michael I. Jordan, et al. (1994). A Model of the Learning of Arm Trajectories from Spatial Deviations. Journal of Cognitive Neuroscience.

[29] H. Collewijn, et al. (2004). Human ocular counterroll: assessment of static and dynamic properties from electromagnetic scleral coil recordings. Experimental Brain Research.

[30] Sidney S. Simon, et al. (2008). Merging of the Senses. Frontiers in Neuroscience.

[31] V. Henn, et al. (1992). Static roll and pitch in the monkey: Shift and rotation of Listing's plane. Vision Research.

[32] F. J. Clark, et al. (1975). Slowly adapting receptors in cat knee joint: can they signal joint angle? Journal of Neurophysiology.

[33] Philip N. Sabes, et al. (2005). Flexible strategies for sensory integration during motor planning. Nature Neuroscience.

[34] H. Bülthoff, et al. (2004). Merging the senses into a robust percept. Trends in Cognitive Sciences.

[35] Dora E. Angelaki, et al. (2007). Spatial Reference Frames of Visual, Vestibular, and Multimodal Heading Signals in the Dorsal Subdivision of the Medial Superior Temporal Area. The Journal of Neuroscience.

[36] M. S. Landy, et al. (2001). Ideal cue combination for localizing texture-defined edges. Journal of the Optical Society of America A.

[37] R. J. van Beers, et al. (1999). Integration of proprioceptive and visual position-information: An experimentally supported model. Journal of Neurophysiology.

[38] M. Landy, et al. (1995). Measurement and modeling of depth cue combination: in defense of weak fusion. Vision Research.

[39] W. Abend, et al. (1972). Response to static tilts of peripheral neurons innervating otolith organs of the squirrel monkey. Journal of Neurophysiology.

[40] Michael I. Jordan, et al. (1992). Forward Models: Supervised Learning with a Distal Teacher. Cognitive Science.

[41] T. Stanford, et al. (2008). Multisensory integration: current issues from the perspective of the single neuron. Nature Reviews Neuroscience.

[42] Sabine M. P. Verschueren, et al. (2002). Position sensitivity of human muscle spindles: single afferent and population representations. Journal of Neurophysiology.

[43] A. Pouget, et al. (2001). Efficient computation and cue integration with noisy population codes. Nature Neuroscience.

[44] R. Andersen, et al. (1983). The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. The Journal of Neuroscience.

[45] M. Ernst, et al. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature.

[46] M. Mon-Williams, et al. (1997). Synaesthesia in the normal limb. Proceedings of the Royal Society of London, Series B: Biological Sciences.

[47] J. F. Soechting, et al. (1989). Errors in pointing are due to approximations in sensorimotor transformations. Journal of Neurophysiology.

[48] Yale E. Cohen, et al. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience.

[49] R. Jacobs, et al. (2001). Experience-dependent visual cue integration based on consistencies between visual and haptic percepts. Vision Research.

[50] M. Chacron, et al. (2007). Neural Variability, Detection Thresholds, and Information Transmission in the Vestibular System. The Journal of Neuroscience.

[51] Demetri Terzopoulos, et al. (2006). Heads up!: biomechanical modeling and neuromuscular control of the neck. ACM Transactions on Graphics.

[52] Wei Ji Ma, et al. (2006). Bayesian inference with probabilistic population codes. Nature Neuroscience.

[53] D. McCloskey, et al. (1972). The contribution of muscle afferents to kinaesthesia shown by vibration induced illusions of movement and by the effects of paralysing joint afferents. Brain.

[54] L. Harris, et al. (2006). The subjective visual vertical and the perceptual upright. Experimental Brain Research.

[55] J. V. Van Gisbergen, et al. (2000). Properties of the internal representation of gravity inferred from spatial-direction and body-tilt estimates. Journal of Neurophysiology.

[56] B. Edin, et al. (1990). Muscle afferent responses to isometric contractions and relaxations in humans. Journal of Neurophysiology.

[57] L. Pinneo (1966). On noise in the nervous system. Psychological Review.

[58] W. Li, et al. (1992). Visual Direction Is Corrected by a Hybrid Extraretinal Eye Position Signal. Annals of the New York Academy of Sciences.

[59] J. D. Crawford, et al. (2007). Comparing limb proprioception and oculomotor signals during hand-guided saccades. Experimental Brain Research.

[60] Philip N. Sabes, et al. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience.

[61] A. Yuille, et al. (2006). Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences.