Hebbian Learning Mechanisms Help Explain the Maturation of Multisensory Speech Integration in Children with Autism Spectrum Disorder (ASD) and with Typical Development (TD): a Neurocomputational Analysis

Cognitive tasks such as communication and speech comprehension rely on the brain's ability to exploit and integrate sensory information from different modalities. Accordingly, the appropriate development of multisensory speech integration (MSI) strongly influences a child's ability to relate successfully with others. Several experimental findings have shown that speech intelligibility is improved by viewing a speaker's articulations, and that MSI continues to develop late into childhood. This work aims at developing a network model to analyze the role of sensory experience during the early stages of life as a mechanism responsible for the maturation of these integrative abilities by adolescence. We extended a model originally developed to study multisensory integration in cortical regions (Magosso et al., 2012; Cuppini et al., 2014) by incorporating a multisensory area known to be involved in audiovisual speech processing, the superior temporal sulcus (STS). The model suggests that the maturation of MSI is primarily due to the maturation of direct connections among primary unisensory regions. This process was the result of a training phase during which the network was exposed to sensory-specific and cross-sensory stimuli, and the excitatory projections among the unisensory regions of the model were subject to Hebbian rules of potentiation and depression. With this model, we also analyzed the acquisition of adult MSI abilities in children with ASD, and we were able to explain their delayed maturation as the result of a lower level of multisensory exposure during the early phases of life.
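
To make the training mechanism concrete, the sketch below illustrates the kind of Hebbian potentiation-and-depression rule described above, applied to direct cross-modal excitatory projections between two unisensory (auditory and visual) layers. It is a minimal illustration rather than the model used in the study: the layer size, sigmoidal activation, learning rate, stimulus statistics, and the omission of the STS layer are all simplifying assumptions made here for clarity.

```python
# Minimal sketch (not the paper's implementation): two unisensory layers
# (auditory A, visual V) exchange direct cross-modal projections whose
# weights follow a Hebbian rule with potentiation for co-active pre/post
# pairs and a presynaptically gated decay acting as depression.
# All parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N = 20        # neurons per unisensory layer (assumption)
eta = 0.01    # Hebbian learning rate (assumption)
w_max = 0.5   # saturation level for cross-modal weights (assumption)

# Cross-modal weight matrices: W_av projects V -> A, W_va projects A -> V.
W_av = np.zeros((N, N))
W_va = np.zeros((N, N))

def sigmoid(x, gain=4.0, thresh=0.5):
    """Static sigmoidal activation, as commonly used in firing-rate models."""
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

def gaussian_input(center, amplitude=1.0, sigma=2.0):
    """Localized external stimulus delivered to one layer (illustrative)."""
    idx = np.arange(N)
    return amplitude * np.exp(-((idx - center) ** 2) / (2 * sigma ** 2))

def steady_state(ext_a, ext_v, n_iter=30):
    """Approximate steady-state layer activities with a few relaxation steps."""
    a = np.zeros(N)
    v = np.zeros(N)
    for _ in range(n_iter):
        a = sigmoid(ext_a + W_av @ v)
        v = sigmoid(ext_v + W_va @ a)
    return a, v

def hebbian_update(W, post, pre):
    """Potentiation for co-active pairs, depression via weight decay,
    with saturation at w_max (a standard Hebb rule with forgetting)."""
    dW = eta * (np.outer(post, pre) - W * pre[np.newaxis, :])
    return np.clip(W + dW, 0.0, w_max)

# Training: expose the network to unisensory and cross-sensory stimuli.
for trial in range(2000):
    pos = rng.integers(0, N)
    kind = rng.choice(["audio", "visual", "audiovisual"], p=[0.25, 0.25, 0.5])
    ext_a = gaussian_input(pos) if kind in ("audio", "audiovisual") else np.zeros(N)
    ext_v = gaussian_input(pos) if kind in ("visual", "audiovisual") else np.zeros(N)
    a, v = steady_state(ext_a, ext_v)
    W_av = hebbian_update(W_av, post=a, pre=v)
    W_va = hebbian_update(W_va, post=v, pre=a)

# After training, a visual-only stimulus evokes activity in the auditory
# layer through the learned direct cross-modal projections.
a_resp, _ = steady_state(np.zeros(N), gaussian_input(N // 2))
print("Peak auditory response to visual-only input:", round(a_resp.max(), 3))
```

In this toy version, cross-sensory (audiovisual) trials strengthen connections between co-activated auditory and visual units, while the decay term weakens connections that are not reinforced; lowering the proportion of audiovisual trials, analogous to the reduced multisensory exposure hypothesized for children with ASD, slows the growth of the cross-modal weights.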

[1] Mauro Ursino, et al. A neural network for learning the meaning of objects and words from a featural representation. Neural Networks, 2015.

[2] Wei Ji Ma, et al. Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space. PLoS ONE, 2009.

[3] Michael S. Beauchamp, et al. A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. NeuroImage, 2012.

[4] J. Vroomen, et al. No evidence for impaired multisensory integration of low-level audiovisual stimuli in adolescents and young adults with autism spectrum disorders. Neuropsychologia, 2013.

[5] T. G. Nicol, et al. Speech-sound discrimination in school-age children: psychophysical and neurophysiologic measures. Journal of Speech, Language, and Hearing Research, 1999.

[6] Ben H. Jansen, et al. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics, 1995.

[7] John J. Foxe, et al. The development of audiovisual multisensory integration across childhood and early adolescence: a high-density electrical mapping study. Cerebral Cortex, 2011.

[8] Mauro Ursino, et al. A Computational Model of the Lexical-Semantic System Based on a Grounded Cognition Approach. Frontiers in Psychology, 2010.

[9] Mauro Ursino, et al. Neurocomputational approaches to modelling multisensory integration in the brain: A review. Neural Networks, 2014.

[10] Thomas J. Anastasio, et al. Using Bayes' Rule to Model Multisensory Enhancement in the Superior Colliculus. Neural Computation, 2000.

[11] D. Knill, et al. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 2004.

[12] Nadia Bolognini, et al. TMS modulation of visual and auditory processing in the posterior parietal cortex. Experimental Brain Research, 2009.

[13] E. Liebenthal, et al. Neural pathways for visual speech perception. Frontiers in Neuroscience, 2014.

[14] Nadia Bolognini, et al. A neurocomputational analysis of the sound-induced flash illusion. NeuroImage, 2014.

[15] Mauro Ursino, et al. A Neural Network Model of Ventriloquism Effect and Aftereffect. PLoS ONE, 2012.

[16] John J. Foxe, et al. Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study. Brain Research. Cognitive Brain Research, 2002.

[17] James L. McClelland, et al. Structure and deterioration of semantic memory: a neuropsychological and computational investigation. Psychological Review, 2004.

[18] Mauro Ursino, et al. Organization, Maturation, and Plasticity of Multisensory Integration: Insights from Computational Modeling Studies. Frontiers in Psychology, 2011.

[19] S. Trehub, et al. Children's perception of speech in multitalker babble. The Journal of the Acoustical Society of America, 2000.

[20] Thomas J. Anastasio, et al. Modeling Cross-Modal Enhancement and Modality-Specific Suppression in Multisensory Neurons. Neural Computation, 2003.

[21] H. McGurk, et al. Hearing lips and seeing voices. Nature, 1976.

[22] Gerry Leisman, et al. Autistic Spectrum Disorders as Functional Disconnection Syndrome. Reviews in the Neurosciences, 2009.

[23] Konrad Paul Kording, et al. Causal Inference in Multisensory Perception. PLoS ONE, 2007.

[24] Wulfram Gerstner, et al. Mathematical formulations of Hebbian learning. Biological Cybernetics, 2002.

[25] Mauro Ursino, et al. Multisensory integration in the superior colliculus: a neural network model. Journal of Computational Neuroscience, 2009.

[26] John J. Foxe, et al. Severe multisensory speech integration deficits in high-functioning school-aged children with Autism Spectrum Disorder (ASD) and their resolution during early adolescence. Cerebral Cortex, 2015.

[27] Manuel R. Mercier, et al. Mapping phonemic processing zones along human perisylvian cortex: an electro-corticographic investigation. Brain Structure and Function, 2013.