Omni-directional Audio-Visual Speaker Detection for Mobile Robot

Tracking a person's position is a useful skill for the coming generation of mobile robots, and a challenging planning and control problem in dynamic environments. We propose an omni-directional method for estimating a speaker's position that combines audio and visual information. The position of the sound source is estimated from the differences in arrival time of the sound at a multi-channel microphone array. Robust human template matching on the omni-directional image is then combined with the sound source estimate to localize the speaker with high accuracy. In our experiments, the system was implemented and tested on an omni-directional robot in our laboratory. The results show that it reliably detects and tracks moving speakers in a natural environment.
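The audio side of the approach, estimating direction from the difference in arrival time of the sound at a pair of microphones, can be sketched for a single microphone pair. The abstract does not specify the exact estimator used on the robot; the version below uses generalized cross-correlation with phase transform (GCC-PHAT), a standard choice for this task, and all function names and parameters (sampling rate, microphone spacing) are illustrative assumptions.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (TDOA) between two
    microphone signals via GCC-PHAT. Positive result means the sound
    reached sig_b after sig_a (illustrative sketch, not the paper's
    exact method)."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = B * np.conj(A)
    cross /= np.abs(cross) + 1e-12      # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)       # circular cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_angle(tdoa, mic_distance, c=343.0):
    """Convert a TDOA (seconds) into a direction-of-arrival angle
    (radians) for two microphones separated by mic_distance metres,
    under a far-field (plane-wave) assumption."""
    sine = np.clip(c * tdoa / mic_distance, -1.0, 1.0)
    return np.arcsin(sine)

# Simulated example: a broadband burst reaching mic B five samples
# after mic A at fs = 16 kHz, mics 0.2 m apart (hypothetical values).
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(2048)
delay = 5
mic_a = src
mic_b = np.concatenate((np.zeros(delay), src[:-delay]))
tdoa = estimate_tdoa(mic_a, mic_b, fs)
angle = doa_angle(tdoa, mic_distance=0.2)
```

With several such pairs arranged around the robot, the per-pair angle estimates can be intersected to obtain an omni-directional bearing, which the visual template-matching stage then refines.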
