Robotic Discovery of the Auditory Scene

In this work, we describe an autonomous mobile robotic system for discovering and investigating ambient noise sources in the environment. Motivated by the strong negative effect of ambient noise on robot audition, the long-term goal is to give a robot awareness of the auditory scene, so that it can more effectively filter out interference or reposition itself to increase the signal-to-noise ratio. Here, we concentrate on the discovery of new sound sources through mobility and directed investigation, performed in two steps. In the first step, the mobile robot explores the surrounding acoustic environment, building evidence grid representations to localize the most influential sound sources in the auditory scene. In the second step, the robot investigates each potential source location to refine the localization result and to identify the volume and directionality characteristics of the source. Once every source has been investigated, a noise map of the entire auditory scene is created, which the robot can use to avoid areas of loud ambient noise when performing an auditory task.
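The evidence grid idea behind the first step can be illustrated with a minimal sketch. The grid size, cell resolution, beam width, and log-odds increments below are illustrative assumptions, not values from the paper: each direction-of-arrival (DOA) bearing measured at a robot pose adds positive evidence to cells inside a narrow beam and a small negative update elsewhere, so that bearings taken from different poses reinforce cells near the true source.

```python
import math

# Illustrative auditory evidence grid (all parameters are assumptions).
GRID_SIZE = 20          # cells per side
CELL = 0.5              # meters per cell
HIT_LOGODDS = 0.4       # evidence added to cells inside the DOA beam
MISS_LOGODDS = -0.05    # evidence removed from all other cells

def update_grid(grid, pose, bearing,
                beam_halfwidth=math.radians(5), max_range=10.0):
    """Add evidence for one DOA measurement taken at pose (x, y) in meters."""
    px, py = pose
    for x in range(GRID_SIZE):
        for y in range(GRID_SIZE):
            cx, cy = (x + 0.5) * CELL, (y + 0.5) * CELL   # cell center
            r = math.hypot(cx - px, cy - py)
            ang = math.atan2(cy - py, cx - px)
            # Smallest signed angular difference between cell and bearing.
            diff = abs((ang - bearing + math.pi) % (2 * math.pi) - math.pi)
            in_beam = r <= max_range and diff <= beam_halfwidth
            grid[x][y] += HIT_LOGODDS if in_beam else MISS_LOGODDS

def most_likely_source(grid):
    """Return the cell with the highest accumulated evidence."""
    return max(((x, y) for x in range(GRID_SIZE) for y in range(GRID_SIZE)),
               key=lambda c: grid[c[0]][c[1]])

# Two bearings toward the same source (near (5, 5) m) from different poses:
grid = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]
update_grid(grid, (0.0, 0.0), math.atan2(5.0, 5.0))   # from the origin
update_grid(grid, (9.0, 0.0), math.atan2(5.0, -4.0))  # from a second pose
```

After both updates, only cells where the two beams intersect, i.e. near the true source, carry doubled evidence, which is the triangulation-by-motion effect the exploration step relies on.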
