Cognitive Analysis of Working Memory Load from EEG, by a Deep Recurrent Neural Network

Electroencephalography (EEG) is one of the most common modalities for observing mental activity. However, EEG recordings are highly susceptible to various sources of noise and to inter-subject differences. To address these problems, we present a deep recurrent neural network (RNN) architecture that learns robust features and predicts levels of cognitive load from EEG recordings. We first transform the EEG time series into a sequence of multispectral images that preserves spatial information. Next, we train our recurrent hybrid network to learn robust representations from this sequence of frames. The proposed approach preserves the spectral, spatial, and temporal structure of the data and extracts features that are less sensitive to variations along each dimension. Our results demonstrate prediction of cognitive memory load across four levels with an overall accuracy of 92.5% during memory task execution, reducing classification error to 7.61% compared with other state-of-the-art techniques.
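The image-construction step described above (EEG time series to a sequence of multispectral frames) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the frequency bands, 2-D electrode coordinates, grid size, and cubic interpolation are all assumptions made here for the example.

```python
import numpy as np
from scipy.interpolate import griddata

def band_power(signal, fs, band):
    """Mean power of `signal` within a frequency band, via the FFT periodogram."""
    freqs = np.fft.rfftfreq(signal.shape[-1], 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[..., mask].mean(axis=-1)

def eeg_to_frame(eeg, fs, locs2d, grid=32):
    """Map one EEG window (channels x samples) to a (grid, grid, 3) image.

    Each colour channel holds the spatially interpolated scalp power of one
    frequency band (theta, alpha, beta), so the frame carries both spectral
    and spatial information; a sequence of such frames is then fed to the RNN.
    """
    bands = [(4, 8), (8, 13), (13, 30)]  # theta, alpha, beta in Hz (assumed)
    xi = np.linspace(locs2d[:, 0].min(), locs2d[:, 0].max(), grid)
    yi = np.linspace(locs2d[:, 1].min(), locs2d[:, 1].max(), grid)
    gx, gy = np.meshgrid(xi, yi)
    frame = np.zeros((grid, grid, len(bands)))
    for k, b in enumerate(bands):
        powers = band_power(eeg, fs, b)  # one power value per electrode
        frame[:, :, k] = griddata(locs2d, powers, (gx, gy),
                                  method="cubic", fill_value=0.0)
    return frame

# Toy usage: 8 electrodes, 1 s of 128 Hz data, random 2-D scalp positions.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 128))
locs = rng.uniform(-1.0, 1.0, size=(8, 2))
img = eeg_to_frame(eeg, 128, locs)
```

Consecutive time windows processed this way yield the sequence of multispectral images from which the recurrent hybrid network learns its representations.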
