Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity

Visual attention estimation is an active field of research at the crossroads of several disciplines: computer vision, artificial intelligence, and medicine. The most common approach to estimating a saliency map representing attention relies on the observed images themselves. In this paper, we show that visual attention can instead be retrieved from EEG recordings, with results comparable to traditional image-based predictions. To this end, we recorded a set of EEG signals and developed several models to study the relationship between visual attention and brain activity. The results are encouraging and on par with approaches that estimate attention from other modalities. The code and dataset used in this paper are available at https://figshare.com/s/3e353bd1c621962888ad to promote research in the field.
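To make the claim of comparability concrete: predicted saliency maps are typically scored against ground-truth attention maps with standard metrics such as the linear correlation coefficient (CC). The sketch below is an illustrative implementation of CC, not the paper's own evaluation code (which is at the Figshare link above); the function and variable names are ours.

```python
import numpy as np

def correlation_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson correlation (CC) between two saliency maps.

    Both maps are normalized to zero mean and unit variance first,
    so the score depends only on the spatial pattern of saliency,
    not on its absolute scale.
    """
    pred = (pred - pred.mean()) / (pred.std() + 1e-8)
    gt = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float(np.mean(pred * gt))

# Identical maps score ~1.0; independent noise maps score near 0.
rng = np.random.default_rng(0)
a = rng.random((48, 64))
b = rng.random((48, 64))
print(correlation_coefficient(a, a))  # close to 1.0
print(correlation_coefficient(a, b))  # close to 0.0
```

A CC near 1 means the two maps highlight the same image regions; this is one of the metrics commonly used to compare image-based and EEG-based attention predictions.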
