Affective Video Events Summarization Using EMD Decomposed EEG Signals (EDES)

Video summarization is a computer-based technique that generates a shorter version of a long original video for memory management and information retrieval. An existing method for video summarization uses the affective level (states of excitement, interest, and panic) of a viewer, captured by Electroencephalography (EEG) while the viewer watches the video. Traditional methods for extracting features from EEG signals employ the Discrete Fourier Transform (DFT). However, the DFT is mainly suitable for stationary signals; since EEG signals are non-linear and non-stationary in nature, the DFT is not appropriate for them. In addition, the high-frequency components of EEG signals usually carry more distinctive properties than the other frequency components. The Empirical Mode Decomposition (EMD) technique extracts the different frequency components, from high to low, from non-linear and non-stationary signals such as EEG. Therefore, we propose a new video summarization method that applies EMD-Decomposed EEG Signals (EDES) to extract the high-frequency components. The proposed approach calculates the Power Spectral Density (PSD) of the high-frequency components and generates a neuronal attention curve for a video. Finally, a video summary is produced by selecting the affective video events from the neuronal attention curve. Experimental results reveal that the proposed approach outperforms the existing state-of-the-art method.
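The PSD-to-attention-curve step of the pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`attention_curve`, `select_events`), the Welch PSD estimator, the total-power score, the min-max normalization, and the fixed selection threshold are all assumptions; the EMD decomposition itself is assumed to be performed upstream (e.g. with an EMD library), so the input here is simply one high-frequency IMF per video segment.

```python
import numpy as np
from scipy.signal import welch


def attention_curve(imf_segments, fs=128):
    """Build a neuronal attention curve from high-frequency IMF segments.

    imf_segments: list of 1-D arrays, one high-frequency IMF per video
    segment (EMD is assumed to have been applied upstream).
    fs: EEG sampling rate in Hz (128 Hz is an assumption, matching
    common affective-EEG datasets).
    """
    scores = []
    for seg in imf_segments:
        # Welch estimate of the Power Spectral Density of the segment
        f, pxx = welch(seg, fs=fs, nperseg=min(256, len(seg)))
        # Total power of the high-frequency IMF as the segment's score
        scores.append(np.sum(pxx))
    scores = np.asarray(scores)
    # Min-max normalization to [0, 1] yields the attention curve
    return (scores - scores.min()) / (scores.ptp() + 1e-12)


def select_events(curve, threshold=0.6):
    """Return indices of segments whose attention exceeds the threshold."""
    return np.flatnonzero(curve >= threshold)
```

A summary would then be assembled by concatenating the video segments whose indices `select_events` returns; in practice the curve could also be smoothed (e.g. with a Savitzky-Golay filter) before thresholding.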
