Development of an Audio-Visual Saliency Map
General Presentation of the Research Domain

The focus of the REVES research group is image and sound synthesis for virtual environments. Our research concerns the development of new algorithms for treating complex scenes in real time, both for image rendering (for example, the capture and rendering of trees using an image-based technique [2]) and for sound (for example, the use of perceptual masking and clustering to render complex sound scenes [1]). We are coordinating the new EU IST/FET project CROSSMOD, which starts on December 1, 2005, and which addresses the perceptual interaction between the auditory and visual channels and the effects of this interaction on rendering and user attention for both sound and images.
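To make the titular notion of an audio-visual saliency map concrete, the sketch below combines a visual saliency map with audio saliency splatted at the on-screen positions of sound sources and takes a weighted sum of the normalized maps. This is purely illustrative: the Gaussian splatting, the fusion weights, and all function names are assumptions for this sketch, not the method developed in the project.

```python
# Minimal, illustrative sketch of audio-visual saliency fusion.
# All design choices here (Gaussian splatting, weighted-sum fusion) are assumptions.
import numpy as np

def normalize(m):
    """Scale a map to [0, 1]; return zeros if it is flat."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def audio_saliency_map(shape, sources):
    """Splat each sound source's saliency as a Gaussian at its pixel position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros(shape)
    for (x, y, loudness, sigma) in sources:
        out += loudness * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return out

def audiovisual_saliency(visual, sources, w_vis=0.5, w_aud=0.5):
    """Weighted sum of the normalized visual and audio saliency maps."""
    aud = audio_saliency_map(visual.shape, sources)
    return w_vis * normalize(visual) + w_aud * normalize(aud)

if __name__ == "__main__":
    vis = np.random.rand(120, 160)                     # stand-in for a visual saliency map
    srcs = [(40, 60, 1.0, 8.0), (120, 90, 0.6, 12.0)]  # (x, y, loudness, spread) per sound source
    av = audiovisual_saliency(vis, srcs)
    print("most salient pixel:", np.unravel_index(av.argmax(), av.shape))
```

In practice the visual map would come from a contrast- or feature-based model and the audio term from a perceptual loudness or auditory saliency model such as [4]; the normalization step simply keeps one modality from dominating the sum.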
[1] George Drettakis et al. Perceptual audio rendering of complex virtual environments. ACM Trans. Graph., 2004.
[2] George Drettakis et al. Volumetric reconstruction and interactive rendering of trees from photographs. SIGGRAPH, 2004.
[3] Kurt Debattista et al. Snapshot: A Rapid Technique for Driving Global Illumination Rendering. WSCG, 2005.
[4] Michael T. Lippert et al. Mechanisms for Allocating Auditory Attention: An Auditory Saliency Map. Current Biology, 2005.