A computationally efficient blind source separation for hearing aid applications and its real-time implementation on smartphone

Conventional Blind Source Separation (BSS) techniques are computationally complex, either because the demixing matrix is computed over the entire signal or because it is updated at every time frame, which makes them impractical for many real-time applications. In this paper, a robust, neural-network-based two-microphone sound source localization method is used as a criterion to improve the efficiency of Independent Vector Analysis (IVA), a BSS method. IVA is used to separate convolutively mixed speech and noise sources. The practical usability of the proposed method is demonstrated by implementing it on a smartphone to separate speech and noise in real-world scenarios for hearing aid applications. Experimental results from objective and subjective tests confirm the usefulness of the developed method for real-world applications.
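To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of gating per-frame IVA demixing-matrix updates with a direction-of-arrival (DOA) change test so that the costly update runs only when the source configuration changes. The localizer placeholder, the simplified IVA step, and all parameter values (`step`, `doa_threshold_deg`, array sizes) are assumptions for illustration; the paper itself uses a neural-network localizer.

```python
# Hedged sketch (assumed, not the authors' method): run the expensive IVA update
# only when the estimated DOA changes; separation with the current demixing
# matrices stays cheap on every frame.
import numpy as np

np.random.seed(0)

n_mics, n_srcs, n_bins, n_frames = 2, 2, 257, 200
# Synthetic STFT mixtures, shape (frames, bins, mics) -- stand-in for real input.
X = np.random.randn(n_frames, n_bins, n_mics) + 1j * np.random.randn(n_frames, n_bins, n_mics)
W = np.tile(np.eye(n_srcs, n_mics, dtype=complex), (n_bins, 1, 1))  # demixing matrix per bin
step, doa_threshold_deg = 0.1, 5.0                                  # assumed tuning values

def estimate_doa(frame):
    """Placeholder two-mic DOA estimate in degrees (the paper uses a learned localizer)."""
    phase = np.angle(np.sum(frame[:, 0] * np.conj(frame[:, 1])))
    return np.degrees(phase)

def iva_update(W, frame, step):
    """One simplified natural-gradient-style IVA step on a single frame."""
    Y = np.einsum('fsm,fm->fs', W, frame)                # demixed spectra, shape (bins, srcs)
    r = np.sqrt(np.sum(np.abs(Y) ** 2, axis=0)) + 1e-8   # per-source norm across bins
    phi = Y / r                                          # multivariate score function
    for f in range(W.shape[0]):
        G = np.eye(n_srcs) - np.outer(phi[f], np.conj(Y[f]))
        W[f] += step * G @ W[f]
    return W

prev_doa, updates_run = None, 0
for t in range(n_frames):
    doa = estimate_doa(X[t])
    if prev_doa is None or abs(doa - prev_doa) > doa_threshold_deg:
        W = iva_update(W, X[t], step)                    # expensive step, gated by DOA change
        updates_run += 1
        prev_doa = doa
    Y_t = np.einsum('fsm,fm->fs', W, X[t])               # per-frame separation is always applied

print(f"IVA updates run on {updates_run} of {n_frames} frames")
```

Skipping the update on frames where the localizer reports no change is what yields the computational savings claimed in the abstract, since applying fixed demixing matrices is far cheaper than re-estimating them every frame.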