Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding

Neural time-series data contain a wide variety of prototypical signal waveforms (atoms) that are important in clinical and cognitive research. One goal of analyzing such data is therefore to extract these 'shift-invariant' atoms. Even though some success has been reported with existing algorithms, they are limited in applicability due to their heuristic nature. Moreover, they are often vulnerable to artifacts and impulsive noise, which are typically present in raw neural recordings. In this study, we address these issues and propose a novel probabilistic convolutional sparse coding (CSC) model for learning shift-invariant atoms from raw neural signals containing potentially severe artifacts. At the core of our model, which we call $\alpha$CSC, lies a family of heavy-tailed distributions called $\alpha$-stable distributions. We develop a novel, computationally efficient Monte Carlo expectation-maximization algorithm for inference. The maximization step boils down to a weighted CSC problem, for which we develop an efficient optimization algorithm. Our results show that the proposed algorithm achieves state-of-the-art convergence speeds. In addition, $\alpha$CSC is significantly more robust to artifacts than three competing algorithms: it can extract spike bursts, oscillations, and even reveal more subtle phenomena such as cross-frequency coupling when applied to noisy neural time series.
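To make the generative model described above concrete, the sketch below simulates a signal as the convolution of a sparse activation sequence with a shift-invariant atom, corrupted by symmetric $\alpha$-stable (heavy-tailed, impulsive) noise. This is a minimal illustrative sketch, not the authors' implementation; the function name `sample_sas`, the atom shape, and all parameter values are assumptions chosen for illustration.

```python
# Minimal sketch (assumptions, not the authors' code) of the alpha-CSC data model:
# observed trial = sparse activations convolved with an atom + alpha-stable noise.
import numpy as np

def sample_sas(alpha, size, rng):
    """Draw symmetric alpha-stable samples (Chambers-Mallows-Stuck, beta = 0)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit-rate exponential
    if np.isclose(alpha, 1.0):
        return np.tan(u)                           # Cauchy special case
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
n_times, n_times_atom = 512, 32

# One shift-invariant atom (a damped oscillation) and a sparse activation signal.
t = np.arange(n_times_atom)
atom = np.sin(2 * np.pi * t / 16.0) * np.exp(-t / 20.0)
activations = np.zeros(n_times - n_times_atom + 1)
events = rng.choice(len(activations), size=5, replace=False)
activations[events] = rng.uniform(1.0, 3.0, size=5)

# Clean signal = convolution of the sparse code with the atom;
# observation = clean signal + impulsive symmetric alpha-stable noise (alpha < 2).
clean = np.convolve(activations, atom)
noise = 0.1 * sample_sas(alpha=1.2, size=len(clean), rng=rng)
x = clean + noise
```

Under this model, a Gaussian noise assumption (ordinary CSC) is easily thrown off by the rare, large noise samples, whereas the $\alpha$-stable likelihood downweights them, which is what the weighted CSC maximization step exploits.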
