Decoding brain cognitive activity across subjects using multimodal M/EEG neuroimaging
Brain decoding is essential to understanding where and how information is encoded in the brain. Prior work has shown that good classification accuracy is achievable when decoding single subjects, but multi-subject classification has proven difficult due to inter-subject variability. In this paper, multimodal neuroimaging was used to improve two-class, multi-subject classification accuracy on a cognitive task of differentiating between a face and a scrambled face. For this transfer learning problem, a feature space based on special-form covariance matrices manipulated with Riemannian geometry was used. A supervised two-layer hierarchical model was trained iteratively to estimate classification accuracy. Results are reported on a publicly available multi-subject, multimodal human neuroimaging dataset from the MRC Cognition and Brain Sciences Unit, University of Cambridge, which contains simultaneous recordings of electroencephalography (EEG) and magnetoencephalography (MEG). Using leave-one-subject-out cross-validation, the model attained a classification accuracy of 70.82% with EEG alone, 81.55% with MEG alone, and 84.98% with multimodal M/EEG.
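The sketch below is a minimal illustration (not the authors' code) of the core pipeline the abstract describes: special-form covariance matrices for event-related trials, a Riemannian tangent-space projection, and leave-one-subject-out cross-validation. It uses the pyriemann and scikit-learn libraries; the synthetic data shapes, the OAS covariance estimator, and the logistic-regression classifier are assumptions, and the paper's two-layer hierarchical model is not reproduced here.

```python
# Hedged sketch of cross-subject M/EEG decoding with Riemannian geometry.
# Assumptions: pyriemann + scikit-learn, synthetic stand-in data, logistic
# regression as the final classifier (the paper's hierarchical model differs).
import numpy as np
from pyriemann.estimation import ERPCovariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 16 subjects x 50 trials, 32 channels, 100 time samples.
rng = np.random.default_rng(0)
n_subjects, n_trials, n_channels, n_times = 16, 50, 32, 100
X = rng.standard_normal((n_subjects * n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_subjects * n_trials)   # face vs. scrambled face
groups = np.repeat(np.arange(n_subjects), n_trials)  # subject label per trial

# ERPCovariances builds special-form covariance matrices for event-related
# data (each trial augmented with class-prototype responses); TangentSpace
# maps them, via the Riemannian metric, to a Euclidean space where a linear
# classifier can be trained across subjects.
clf = make_pipeline(
    ERPCovariances(estimator="oas"),
    TangentSpace(metric="riemann"),
    LogisticRegression(max_iter=1000),
)

# Leave-one-subject-out cross-validation: each subject serves as the test
# set exactly once, so accuracy reflects transfer to unseen subjects.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean LOSO accuracy: {scores.mean():.3f}")
```

For a multimodal M/EEG variant, one natural design choice is to run this pipeline per modality and concatenate the tangent-space features before the classifier, though the paper's exact fusion scheme is not specified in the abstract.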
[1] Richard N. Henson et al., "A multi-subject, multi-modal human neuroimaging dataset," Scientific Data, 2015.
[2] Paolo Avesani et al., "MEG decoding across subjects," 2014 International Workshop on Pattern Recognition in Neuroimaging, 2014.
[3] Taghi M. Khoshgoftaar et al., "A survey of transfer learning," Journal of Big Data, 2016.