Music cognition: Bridging computation and insights from cognitive neuroscience

Marcus Pearce (marcus.pearce@eecs.qmul.ac.uk)
Centre for Digital Music and Research Centre in Psychology, Queen Mary, University of London, E1 4NS, UK

Martin Rohrmeier (mr1@mit.edu)
MIT Intelligence Initiative, Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, MA, USA

Psyche Loui (ploui@bidmc.harvard.edu)
Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA

Edward Large (large@ccs.fau.edu) and Ji Chul Kim (kim@ccs.fau.edu)
Center for Complex Systems & Brain Sciences, Florida Atlantic University

Petri Toiviainen (petri.toiviainen@jyu.fi)
University of Jyväskylä, Finland

Elvira Brattico (brattico@mappi.helsinki.fi)
Aalto University, Finland

Keywords: Music cognition; cognitive neuroscience; computational modelling; processing; prediction; grammar

Goals and Scope

In recent years, computational models have become an increasingly important part of both cognitive science and cognitive neuroscience. In tandem with these developments, neuroscientific and cognitive investigations of musical experience and behaviour have been gathering pace. In this context, music cognition constitutes a rich and challenging area of cognitive science in which the processing of complex, multi-dimensional temporal sequences can be studied without interference from meaning or semantics (see Pearce & Rohrmeier, 2012, for a review). Because of its complexity and well-defined problem space, computational modelling of music has witnessed a rapid growth of successful higher-order modelling approaches. This symposium investigates computational modelling as a bridge between cognition and the brain, with a focus on understanding the psychological mechanisms involved in perceiving and producing music. Many approaches have been taken to modelling the wide variety of cognitive processes involved in music perception and creation, encompassing basic structural processing, statistical learning and memory, as well as motor, emotional and social cognitive processes. Recent computational models range from hierarchical, rule-based systems for representing harmonic movement, inspired by probabilistic grammars for language, through oscillator-based network models of metrical and tonal perception, to probabilistic methods derived from machine learning for modelling dynamic learning and predictive processing of style-specific musical structure. Turning to cognitive neuroscience, recent years have seen increasing interest in advanced computational modelling of EEG and fMRI data, used to distinguish the brain regions responsible for processing different aspects of music (e.g., rhythm, pitch, timbre, harmony) and the functional connectivity between them. The purpose of this symposium is to bring together current research trends working towards a synthesis of these two research areas, linking the parameters and subcomponents of cognitive models of musical processing to functional and anatomical properties of the brain.

Petri Toiviainen and Elvira Brattico
Decoding the musical brain during naturalistic listening

Encoding, or the prediction of neural activation from stimulus features, is a common modelling approach in neuroscience.
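As a rough illustration of the encoding approach (a minimal sketch in Python with scikit-learn, not the analysis code of the study described below; the synthetic data and all dimensions and parameter values are assumptions for illustration), one regularized regression can be fitted per voxel from stimulus features to the voxel's time series, with predictive quality assessed on held-out data:

    # Encoding sketch: predict each voxel's time series from stimulus
    # features with ridge regression (synthetic data, assumed shapes).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_scans, n_features, n_voxels = 300, 25, 100    # hypothetical dimensions
    X = rng.standard_normal((n_scans, n_features))  # musical features per scan
    Y = rng.standard_normal((n_scans, n_voxels))    # fMRI voxel time series

    # One encoding model per voxel, scored by cross-validated R^2 so that
    # "goodness of prediction" is assessed out of sample.
    scores = [
        cross_val_score(Ridge(alpha=1.0), X, Y[:, v], cv=5, scoring="r2").mean()
        for v in range(n_voxels)
    ]
    print("best-predicted voxel, cross-validated R^2:", max(scores))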
In our recent neuroimaging study, we applied encoding to predict brain activity during listening to different pieces of music from an extensive set of musical features computationally extracted from the pieces, and found widespread brain activation, including auditory, limbic and motor areas (Alluri et al., NeuroImage, under review). With such complex and distributed neural activation, evaluating different encoding models is not straightforward, because the goodness of prediction is difficult to assess. Decoding, or the prediction of physical or perceived stimulus features from the observed neural activation, offers potentially more straightforward model evaluation, because performance is easier to characterize in terms of, for instance, correct classification rate. In a series of experiments, our participants were measured with functional magnetic resonance imaging (fMRI) while listening to three different musical pieces. Subsequently, musical features were computationally extracted from the pieces, and continuous emotion ratings were collected from the participants. For decoding, the fMRI data were subjected to dimensionality reduction via voxel selection and spatial subspace projection, and the resulting projections were regressed against the musical features or the emotion ratings. Cross-validation was used to avoid overfitting, and different voxel selection criteria and subspace projection dimensionalities were compared to optimize prediction accuracy; a schematic sketch of such a pipeline is given below. The decoding results and the challenges of the approach will be discussed at the symposium.
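To make the pipeline concrete, here is a minimal decoding sketch in Python with scikit-learn. It is not the code used in the study: the univariate F-score voxel selection, the PCA subspace, the synthetic data and all parameter values are illustrative assumptions standing in for the criteria and dimensionalities actually compared.

    # Decoding sketch: select voxels, project onto a low-dimensional
    # subspace, then regress the projections against a musical feature.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    n_scans, n_voxels = 300, 5000                    # assumed dimensions
    fmri = rng.standard_normal((n_scans, n_voxels))  # voxel time series
    # Stand-in "musical feature", correlated with the first 40 voxels so
    # the toy decoder has some signal to find.
    feature = fmri[:, :40].mean(axis=1)

    decoder = make_pipeline(
        SelectKBest(f_regression, k=500),  # voxel selection criterion
        PCA(n_components=20),              # spatial subspace projection
        LinearRegression(),                # regress projections on feature
    )
    # Cross-validation guards against overfitting; k and n_components are
    # the kind of settings one would compare to optimize accuracy.
    print(cross_val_score(decoder, fmri, feature, cv=5, scoring="r2").mean())

For classification-style decoding (e.g., predicting which piece was heard), the final regressor would be replaced by a classifier and performance reported as correct classification rate.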

Psyche Loui
Behavioral and DTI Studies on Normal and Impaired Learning of Musical Structure

One of the central questions of cognitive science concerns how humans acquire knowledge from exposure to stimuli in the environment. In the context of music, knowledge

References

Meyer, L. B. (1957). Meaning in music and information theory. The Journal of Aesthetics and Art Criticism, 15(4), 412-424.

Pearce, M. T., & Rohrmeier, M. (2012). Music cognition and the cognitive sciences. Topics in Cognitive Science, 4(4), 468-484.

Rohrmeier, M., & Koelsch, S. (2012). Predictive information processing in music cognition. A critical review. International Journal of Psychophysiology, 83(2), 164-175.