Separating Sound Sources Using their Orientation Obtained by Localization

We propose a method for sound source separation. The only information this method requires is the difference in arrival time of each sound source at every pair of microphones. For this purpose, we connected our separation system to a localization system. The localization system localizes sound sources from the arrival-time differences at onsets, which are not mixed with the ongoing portions of other sound sources. Several experiments were carried out in an anechoic chamber. Using a DSP system, we separated two speech sources located 38° apart in azimuth, achieving an average attenuation ratio of 25 dB. The separable azimuth difference was greater than 15°, and a 5° localization error reduced the attenuation ratio by about 6 dB.
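The abstract does not spell out how the arrival-time disparity at a microphone pair is measured; the following is only an illustrative sketch, assuming a standard cross-correlation estimate applied to a short onset frame. The function name, frame length, and delay bound are hypothetical and not taken from the paper.

```python
import numpy as np

def estimate_time_disparity(sig_a, sig_b, fs, max_delay_s=1e-3):
    """Estimate how much later (in seconds) the signal arrives at sig_b
    than at sig_a, from the peak of their cross-correlation.

    sig_a, sig_b : equal-length 1-D arrays (one onset frame per microphone)
    fs           : sampling rate in Hz
    max_delay_s  : largest physically plausible delay (mic spacing / speed of sound)
    """
    n = len(sig_a)
    # Cross-correlate via FFT, zero-padded to 2n to avoid circular wrap-around.
    spec = np.fft.rfft(sig_b, 2 * n) * np.conj(np.fft.rfft(sig_a, 2 * n))
    xcorr = np.fft.irfft(spec, 2 * n)
    # Reorder so the lags run from -(n-1) to n-1.
    xcorr = np.concatenate((xcorr[-(n - 1):], xcorr[:n]))
    lags = np.arange(-(n - 1), n)
    # Restrict the peak search to physically possible lags.
    max_lag = int(max_delay_s * fs)
    valid = np.abs(lags) <= max_lag
    best_lag = lags[valid][np.argmax(xcorr[valid])]
    return best_lag / fs

# Toy check: a broadband onset reaching microphone B four samples later.
fs = 16_000
rng = np.random.default_rng(0)
onset = rng.standard_normal(320)                  # 20 ms frame
mic_a = onset
mic_b = np.concatenate((np.zeros(4), onset[:-4]))
print(estimate_time_disparity(mic_a, mic_b, fs))  # ~ 4 / 16000 = 2.5e-4 s
```

Measuring the disparity only at onsets, as the abstract describes, limits the correlation to segments where one source dominates, so the estimate is not corrupted by the ongoing portions of other sources.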