On the Use of Auditory Representations for Sparsity-Based Sound Source Separation

Sparsity-based source separation algorithms often rely on a transformation into a sparse domain to improve mixture disjointness and thereby facilitate separation. To this end, the most commonly used time-frequency representation has been the short-time Fourier transform (STFT). The purpose of this paper is to study the use of auditory-based representations instead of the STFT. We first evaluate the disjointness properties of the STFT for speech and music signals, and show that auditory representations based on the equivalent rectangular bandwidth (ERB) and Bark frequency scales can improve the disjointness of the transformed mixtures.
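To make the two auditory frequency scales concrete, the sketch below implements the standard Hz-to-ERB-rate conversion of Glasberg and Moore and the Hz-to-Bark conversion of Zwicker; these are widely used formulas, though the paper itself may rely on a different parameterization. The function names are illustrative, not taken from the paper.

```python
import math

def hz_to_erb_rate(f_hz):
    """Map frequency in Hz to the ERB-rate scale (Glasberg & Moore, 1990).

    Filter channels spaced uniformly on this scale approximate the
    frequency resolution of the human auditory system.
    """
    return 21.4 * math.log10(1.0 + 0.00437 * f_hz)

def hz_to_bark(f_hz):
    """Map frequency in Hz to the Bark scale (Zwicker's formula).

    One Bark corresponds roughly to one critical band of hearing.
    """
    return (13.0 * math.atan(0.00076 * f_hz)
            + 3.5 * math.atan((f_hz / 7500.0) ** 2))

# Both scales compress high frequencies relative to a linear Hz axis,
# allocating more channels to the perceptually important low range.
for f in (250, 1000, 4000):
    print(f, round(hz_to_erb_rate(f), 2), round(hz_to_bark(f), 2))
```

A filterbank whose center frequencies are spaced uniformly on either scale yields narrow bands at low frequencies and broad bands at high frequencies, which is the property exploited when measuring disjointness in an auditory domain.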