Feature extraction for early auditory-visual integration

Sonar operators combine auditory and visual information when making target detection and identification decisions. Many past attempts to automate the sonar operator's role have considered only the visual information and have been unsuccessful. An assessment has been made of the auditory component of the sonar operator's role, and this assessment has been used to select three algorithms with the potential to detect features that discriminate between target types. Results are presented from applying these algorithms to relevant time-series data, and the use of these features is then discussed in terms of the concept of early auditory-visual integration.
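The abstract does not specify the three algorithms, so as a purely illustrative sketch of the kind of auditory feature extraction it describes, the snippet below computes a spectral centroid (the amplitude-weighted mean frequency) for a time series. The function name, the synthetic signals, and the choice of feature are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of a real-valued time series.

    A simple auditory feature: signals dominated by higher frequencies
    yield a higher centroid, so it can help separate target types.
    (Illustrative choice only; the paper's algorithms are not given here.)
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Two synthetic "targets": a low-frequency tone and a high-frequency tone.
sr = 8000                      # sample rate in Hz (assumed)
t = np.arange(sr) / sr         # one second of samples
low = np.sin(2 * np.pi * 200 * t)
high = np.sin(2 * np.pi * 1500 * t)

print(spectral_centroid(low, sr) < spectral_centroid(high, sr))  # True
```

Such scalar features, computed per signal, could then feed a downstream classifier alongside visual features, which is the kind of early auditory-visual integration the abstract discusses.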
