Using Machine Learning to Detect Cognitive States across Multiple Subjects

Is it feasible to train cross-subject classifiers to decode the cognitive state of a human subject from functional Magnetic Resonance Imaging (fMRI) data observed over a single time interval? If so, such trained classifiers could serve as virtual sensors that detect cognitive states across multiple human subjects. This problem is relevant both to experimental research in cognitive science and to the diagnosis of mental processes in patients with brain injuries. The primary obstacle to training inter-subject classifiers on fMRI data is anatomical variability among subjects. We describe two approaches to overcoming this difficulty: the first uses anatomically defined Regions of Interest (ROIs) as a basis for spatially abstracting the data, and the second transforms the data from different subjects into Talairach-Tournoux coordinates. In particular, we present two fMRI case studies in which we successfully trained cross-subject classifiers to distinguish cognitive states such as (1) whether the subject is looking at a picture or a sentence describing that picture, and (2) whether the subject is reading an ambiguous or an unambiguous sentence.
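
To make the first (ROI-based) approach concrete, the sketch below shows how voxel-level data can be abstracted to one mean-activation feature per anatomically defined ROI, giving every subject a feature vector of the same length, and how a classifier can then be evaluated with a leave-one-subject-out protocol. This is a minimal illustration, not the paper's implementation: the data shapes, the `roi_labels` array, and the choice of a Gaussian Naive Bayes classifier (via scikit-learn) are all assumptions made for the example, and the data here is synthetic.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical dimensions: 6 subjects, 40 trials each, 2000 voxels, 8 ROIs.
n_subjects, n_trials, n_voxels, n_rois = 6, 40, 2000, 8

# Synthetic stand-ins for real data:
#   X[s, t, v]       -- activation of voxel v on trial t for subject s
#   y[s, t]          -- cognitive-state label (e.g., 0 = picture, 1 = sentence)
#   roi_labels[s, v] -- ROI assignment of each voxel (anatomy varies per subject)
X = rng.normal(size=(n_subjects, n_trials, n_voxels))
y = rng.integers(0, 2, size=(n_subjects, n_trials))
roi_labels = rng.integers(0, n_rois, size=(n_subjects, n_voxels))

def roi_features(x_subj, rois):
    """Abstract voxel-level data to one mean-activation feature per ROI.

    Averaging within anatomically defined ROIs yields a fixed-length
    feature vector (n_rois) that is comparable across subjects despite
    differing anatomy and voxel counts.
    """
    return np.stack(
        [x_subj[:, rois == r].mean(axis=1) for r in range(n_rois)], axis=1
    )

# Leave-one-subject-out evaluation: train on all other subjects,
# then test on the held-out subject's data.
for held_out in range(n_subjects):
    train_X = np.concatenate(
        [roi_features(X[s], roi_labels[s]) for s in range(n_subjects) if s != held_out]
    )
    train_y = np.concatenate([y[s] for s in range(n_subjects) if s != held_out])
    clf = GaussianNB().fit(train_X, train_y)
    acc = clf.score(roi_features(X[held_out], roi_labels[held_out]), y[held_out])
    print(f"held-out subject {held_out}: accuracy = {acc:.2f}")
```

The second approach replaces the `roi_features` step with spatial normalization of each subject's volumes into a common Talairach-Tournoux coordinate space, so that voxel-level features themselves become comparable across subjects.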