Model-based interactive sound for an immersive virtual environment

We describe an audio rendering pipeline that provides real-time interactive sound synthesis for virtual environments. Sounds are controlled by computational models, including experimental scientific systems. We discuss composition protocols and a software architecture for hierarchical control and for synchronization with graphics. Rendering algorithms are presented for producing sound from a physically-based simulation of a chaotic system and from higher-dimensional topological structures.

Existing computer music systems provide some of these capabilities in specialized hardware. Rather than adopt an existing music system, we have focused on demonstrating that sound synthesis is relevant to general-purpose computing. We want researchers to have immediate access to sound computation in the same language, operating system, and control flow that support standard computing and graphics rendering engines. Our pipeline is therefore written in Unix/C/C++ to preserve its potential for portability and scalability and to stay close to graphics architectures and their user communities.

In this paper we discuss the implementation of a rendering pipeline designed to bring sound synthesis and composition as research components into virtual environments (VE). We find that VE research provides a platform for projects closely related to computer music composition. We also find that the VE research community is interested in the potential relevance of composition for their work, and in the relevance of their work for composers. We have been developing a software-based sound synthesis and composition protocol to enhance the possibilities of collaboration. This protocol defines a pipeline from computational models to sounds. Along this pipeline we identify endeavors related to computer music, including real-time sound synthesis, gesture-based interaction, composition algorithms, physically-based sound production models, and techniques for synchronizing sound with graphical events.
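To illustrate how a physically-based simulation can drive sound production, the following C++ sketch integrates the Lorenz equations and maps one state variable directly to audio samples. The Lorenz system here is only a stand-in for an arbitrary chaotic model, not the specific system used in our pipeline; the integration scheme, step size, normalization, and raw-PCM output are illustrative assumptions.

// Minimal sketch: audio samples driven by a chaotic ODE simulation.
// The Lorenz system stands in for an arbitrary physically-based model;
// parameters, step size, and the output mapping are illustrative only.
#include <cstdint>
#include <cstdio>
#include <algorithm>

struct LorenzState { double x, y, z; };

// One explicit Euler step of the Lorenz equations (classic parameters).
// Euler is crude but keeps the sketch short; the trajectory stays bounded.
static LorenzState step(LorenzState s, double dt) {
    const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
    LorenzState n;
    n.x = s.x + dt * sigma * (s.y - s.x);
    n.y = s.y + dt * (s.x * (rho - s.z) - s.y);
    n.z = s.z + dt * (s.x * s.y - beta * s.z);
    return n;
}

int main() {
    const int    sampleRate = 44100;   // audio rate (assumed)
    const double dt         = 0.004;   // model time per audio sample; sets perceived pitch
    const int    seconds    = 5;

    LorenzState s{1.0, 1.0, 1.0};
    FILE* out = std::fopen("chaos.raw", "wb");   // raw mono 16-bit PCM
    if (!out) return 1;

    for (int i = 0; i < sampleRate * seconds; ++i) {
        s = step(s, dt);                                          // advance the physical model
        double a = std::max(-1.0, std::min(1.0, s.x / 25.0));     // map state to amplitude
        int16_t sample = static_cast<int16_t>(a * 32767.0);
        std::fwrite(&sample, sizeof(sample), 1, out);
    }
    std::fclose(out);
    return 0;
}

Playing the resulting raw file at 44.1 kHz makes the chaotic trajectory audible; changing dt rescales model time against audio time and so shifts the perceived pitch.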
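One common way to keep sound in step with graphical events is to time-stamp sound-control messages against a shared clock and have the audio renderer dispatch each message when its time falls inside the current audio block. The sketch below is only a generic illustration of that idea, not the protocol described in this paper; the event fields, queue structure, and block size are assumptions.

// Generic sketch of time-stamped scheduling for audio/graphics synchronization.
// Not this paper's protocol: event layout, clock, and block size are assumed.
#include <queue>
#include <vector>
#include <cstdio>

struct SoundEvent {
    double when;   // time in seconds on a clock shared by graphics and audio
    int    voice;  // which synthesis voice to address (hypothetical field)
    double value;  // control value, e.g. amplitude or pitch (hypothetical field)
};

struct Later {
    bool operator()(const SoundEvent& a, const SoundEvent& b) const {
        return a.when > b.when;    // earliest event ends up on top
    }
};

using EventQueue = std::priority_queue<SoundEvent, std::vector<SoundEvent>, Later>;

// Graphics side: post an event for the audio renderer to apply at time 'when'.
void schedule(EventQueue& q, double when, int voice, double value) {
    q.push(SoundEvent{when, voice, value});
}

// Audio side: before rendering a block, apply every event that falls inside it.
void renderBlock(EventQueue& q, double blockStart, double blockDuration) {
    while (!q.empty() && q.top().when < blockStart + blockDuration) {
        SoundEvent e = q.top();
        q.pop();
        std::printf("t=%.3f  voice %d <- %.2f\n", e.when, e.voice, e.value);
        // ...update the corresponding synthesis voice here...
    }
    // ...synthesize blockDuration seconds of audio with the updated parameters...
}

int main() {
    EventQueue q;
    schedule(q, 0.50, 1, 0.8);     // e.g. a collision to be heard at half a second
    schedule(q, 0.25, 2, 440.0);   // e.g. an object entering view slightly earlier

    const double blockDuration = 0.1;   // 100 ms audio blocks (assumed)
    for (int i = 0; i < 10; ++i)
        renderBlock(q, i * blockDuration, blockDuration);
    return 0;
}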