Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction

How meaning is represented in the brain is still one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter its representation (answering "can you eat it?" versus "can it fly?")? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks. However, it is still not understood how the task itself contributes to this difference. In the current work, we study magnetoencephalography (MEG) recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e., the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the semantics of the task. Using this approach, we test several hypotheses about task-stimulus interactions by comparing the zero-shot predictions they make for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings across participants. The improvement occurs 475–550 ms after the participants first see the word, which corresponds to what is generally considered to be the end of semantic processing of a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimulus.
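
Below is a minimal sketch, not the authors' code, of the zero-shot encoding setup described above: MEG responses are predicted either from a noun embedding alone or from a concatenated noun-and-task embedding, with models fit by ridge regression on training (noun, task) pairs and evaluated only on nouns and tasks that were never seen during training. All arrays, dimensions, and embeddings here are hypothetical placeholders; random data stands in for real MEG recordings and stimulus features.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_nouns, n_tasks = 60, 20          # hypothetical numbers of nouns and tasks
d_noun, d_task = 300, 300          # hypothetical embedding dimensions
n_sensors, n_times = 102, 40       # hypothetical MEG sensors x time bins

# Hypothetical feature matrices: one embedding per noun and per task.
noun_emb = rng.standard_normal((n_nouns, d_noun))
task_emb = rng.standard_normal((n_tasks, d_task))

# Hypothetical MEG responses for every (noun, task) pair,
# flattened over sensors and time bins.
meg = rng.standard_normal((n_nouns, n_tasks, n_sensors * n_times))

def features(noun_idx, task_idx, use_task=True):
    """Build trial features: noun embedding, optionally with the task embedding."""
    x_noun = noun_emb[noun_idx]
    if not use_task:
        return x_noun
    return np.hstack([x_noun, task_emb[task_idx]])

# Zero-shot split: hold out some nouns AND some tasks entirely from training.
held_nouns, held_tasks = np.arange(50, 60), np.arange(16, 20)
train_nouns = np.setdiff1d(np.arange(n_nouns), held_nouns)
train_tasks = np.setdiff1d(np.arange(n_tasks), held_tasks)

def make_set(nouns, tasks, use_task):
    """Assemble features and MEG targets for all (noun, task) pairs in the split."""
    pairs = [(n, t) for n in nouns for t in tasks]
    n_idx, t_idx = map(np.array, zip(*pairs))
    return features(n_idx, t_idx, use_task), meg[n_idx, t_idx]

# Compare the noun-only hypothesis against the noun + task hypothesis.
for use_task in (False, True):
    X_tr, Y_tr = make_set(train_nouns, train_tasks, use_task)
    X_te, Y_te = make_set(held_nouns, held_tasks, use_task)
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
    score = r2_score(Y_te, model.predict(X_te), multioutput="uniform_average")
    print(f"use_task={use_task}: zero-shot R^2 = {score:.3f}")
```

In an analysis like the one in the abstract, one would score each sensor and time window separately rather than averaging over the whole response, so that the time course of the task effect (e.g., the 475–550 ms window) can be localized, and then test the improvement from the task features for significance across participants.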
