Keynote: Towards immersive multimodal display: Interactive auditory rendering for complex virtual environments

Extending the frontier of visual computing, an interactive, multimodal VR environment uses audio and touch-enabled interfaces to communicate information to the user and to augment the graphical rendering. By harnessing these additional sensory channels, an immersive multimodal display can further enhance a user's experience in a virtual world. Beyond immersive environments, multimodal display can also provide a natural and intuitive human-computer interface for many desktop applications, such as computer games, online virtual worlds, visualization, simulation, and training. Compared with visual and haptic rendering, sound rendering has far more demanding computational requirements, which makes interactive auditory display a highly challenging problem. In this talk, I will give an overview of our recent work on interactive auditory display, spanning both sound synthesis and sound propagation. This work includes generating realistic, physically based sounds using perceptually guided principles and dynamic simulation. I will also describe novel algorithms for immersive sound effects based on improved numerical techniques and fast geometric sound propagation. Finally, I will present new techniques for cross-modal interaction in VR. These systems improve the performance of state-of-the-art sound rendering by one to two orders of magnitude and will be demonstrated in complex, dynamic virtual environments and VR applications. I will conclude by discussing possible future research directions in multimodal interaction with VR systems.