Modularity allows classification of human brain networks during music and speech perception
We investigate the use of modularity as a quantifier of whole-brain functional networks. Brain networks are constructed from functional magnetic resonance imaging while subjects listened to auditory pieces that varied in emotivity and cultural familiarity. The results of our analysis reveal high and low modularity groups based on the network configuration during a subject's favorite song, and this classification can predict network reconfiguration during the other auditory pieces. In particular, subjects in the low modularity group show significant brain network reconfiguration during both familiar and unfamiliar pieces. In contrast, the high modularity brain networks appear more robust and only exhibit significant changes during the unfamiliar music and speech. We also find differences in the stability of module composition for the two groups during each auditory piece. Our results suggest that the modularity of the whole-brain network plays a significant role in the way the network reconfigures during varying auditory processing demands, and it may therefore contribute to individual differences in neuroplasticity capability during therapeutic music engagement.
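The modularity analysis described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' pipeline: it assumes a functional connectivity matrix built from pairwise correlations of regional BOLD time series (here replaced by random data), an illustrative correlation threshold of 0.15, and networkx's greedy modularity community detection as a stand-in for whichever module-detection method the study used.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI data: regional BOLD time series,
# from which a functional connectivity (correlation) matrix is built.
n_regions = 20
time_series = rng.standard_normal((n_regions, 200))
corr = np.corrcoef(time_series)

# Threshold the correlation matrix to get a binary adjacency matrix.
# The 0.15 cutoff is an illustrative choice, not taken from the paper.
adj = (np.abs(corr) > 0.15) & ~np.eye(n_regions, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

# Detect modules and score the partition with the modularity Q;
# comparing Q across subjects would separate high/low modularity groups.
communities = greedy_modularity_communities(G)
Q = nx.community.modularity(G, communities)
print(f"modularity Q = {Q:.3f}, modules = {len(communities)}")
```

Tracking how the detected module composition changes between auditory conditions (favorite song, familiar piece, unfamiliar piece, speech) would then quantify the network reconfiguration and module stability the abstract reports.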