Automatic Speech Separation Enables Brain-Controlled Hearable Technologies
Nima Mesgarani | James A. O'Sullivan | Cong Han | Yi Luo | Jose Herrero | Ashesh D. Mehta