Neural decoding of attentional selection in multi-speaker environments without access to separated sources

People with hearing impairments often find it difficult to follow a conversation in a multi-speaker environment. Modern hearing aids can suppress background noise, but they can do little to help a user attend to a single conversation without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are a natural next step, but several challenges remain, including the lack of access to the clean sound sources in the environment against which the neural signals can be compared. We propose a novel framework that combines single-channel speech separation algorithms with AAD. We present an end-to-end system that 1) receives a single audio channel containing a mixture of speakers heard by a listener, along with the listener's neural signals, 2) automatically separates the individual speakers in the mixture, 3) determines the attended speaker, and 4) amplifies the attended speaker's voice to assist the listener. Using invasive electrophysiology recordings, our system decodes a subject's attentional focus and detects switches in attention using only the mixed audio. We also identify the regions of the auditory cortex that contribute to AAD. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Our novel framework for AAD bridges the gap between the most recent advances in speech processing technologies and speech prosthesis research, moving us closer to the development of cognitively controlled hearing aids.
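The four-step pipeline above can be summarized in a short sketch. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: `separate_speakers` stands in for any single-channel separation model and is passed in by the caller, the linear stimulus-reconstruction weights `W` are assumed to be pretrained on trials with clean speech, and the sampling rates, lag count, and gain value are illustrative placeholders. The core idea it demonstrates is standard correlation-based AAD: reconstruct a speech envelope from the neural data, correlate it with the envelope of each separated source, and boost the best-matching source in the remix.

```python
# Minimal sketch of the end-to-end AAD pipeline (steps 1-4 above).
# FS_AUDIO, FS_NEURAL, N_LAGS, and gain_db are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert, resample

FS_AUDIO = 16000   # audio sampling rate in Hz (assumed)
FS_NEURAL = 100    # rate of the neural recording / envelopes (assumed)
N_LAGS = 30        # decoder time lags, ~300 ms at 100 Hz (assumed)

def envelope(audio, fs_in=FS_AUDIO, fs_out=FS_NEURAL):
    """Broadband amplitude envelope, downsampled to the neural rate."""
    env = np.abs(hilbert(audio))
    n_out = int(len(audio) * fs_out / fs_in)
    return resample(env, n_out)

def lag_matrix(neural, n_lags=N_LAGS):
    """Stack time-lagged copies of each channel: (T, C) -> (T, C*n_lags)."""
    T, C = neural.shape
    X = np.zeros((T, C * n_lags))
    for k in range(n_lags):
        X[k:, k * C:(k + 1) * C] = neural[:T - k]
    return X

def decode_attention(mixture, neural, W, separate_speakers):
    """Steps 2-3: separate the mixture, then pick the attended source.

    mixture: 1-D mixed audio; neural: (T, C) recording aligned to it;
    W: pretrained linear stimulus-reconstruction weights, shape (C*n_lags,);
    separate_speakers: hypothetical single-channel separation function
    returning a list of source waveforms.
    """
    sources = separate_speakers(mixture)
    recon = lag_matrix(neural) @ W   # reconstructed attended envelope
    corrs = []
    for s in sources:
        e = envelope(s)
        L = min(len(recon), len(e))  # guard against length mismatch
        corrs.append(np.corrcoef(recon[:L], e[:L])[0, 1])
    return int(np.argmax(corrs)), sources

def amplify_attended(sources, attended, gain_db=9.0):
    """Step 4: remix with the decoded attended source boosted."""
    g = 10 ** (gain_db / 20)
    return sum(g * s if i == attended else s for i, s in enumerate(sources))
```

In a real system the decoding and remixing would run on sliding windows so that switches in attention are tracked over time; the window length trades decoding accuracy against how quickly a switch is detected.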
