A Paradigm for Physical Interaction with Sound in 3-D Audio Space

Immersive virtual environments offer the possibility of natural interaction with a virtual scene that is familiar to users because it is based on everyday activity. However, the use of such environments for the representation and control of interactive musical systems remains largely unexplored. We propose a paradigm for working with sound and music in a physical context, and develop a framework that allows for the creation of spatialized audio scenes. The framework organizes audio scene content using structures called soundNodes, soundConnections, and DSP graphs, and offers finer control than other audio scene representations. 3-D simulation with physical modelling defines how audio is processed, and can offer users a high degree of expressive interaction with sound, particularly when the rules for sound propagation are bent. Sound sources and sinks are modelled within the scene along with the user/listener/performer, creating a navigable 3-D sonic space for sound engineering, musical creation, listening and performance.
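To make the abstract's terminology concrete, the following is a minimal sketch of how soundNodes, soundConnections, and distance-based propagation might be organized. All class names, fields, and the rolloff formula here are illustrative assumptions, not the paper's actual API; the exponent parameter stands in for "bending" the physical rules of sound propagation.

```python
import math
from dataclasses import dataclass

@dataclass
class SoundNode:
    """A located sound source or sink in the 3-D scene."""
    name: str
    position: tuple  # (x, y, z), e.g. in metres

@dataclass
class SoundConnection:
    """A directed edge from a source node to a sink node.

    The connection's gain is derived from simulated propagation:
    rolloff = 1.0 approximates physical amplitude attenuation (~1/d);
    other values deliberately bend the rules for expressive effect.
    """
    source: SoundNode
    sink: SoundNode
    rolloff: float = 1.0

    def gain(self) -> float:
        d = math.dist(self.source.position, self.sink.position)
        # Clamp distance to avoid infinite gain at the source itself.
        return 1.0 / max(d, 1.0) ** self.rolloff

# Example: a source 2 m from a listener-attached sink.
source = SoundNode("violin", (0.0, 0.0, 0.0))
sink = SoundNode("listener", (0.0, 0.0, 2.0))
physical = SoundConnection(source, sink, rolloff=1.0)
bent = SoundConnection(source, sink, rolloff=0.5)  # gentler, non-physical decay
print(physical.gain())  # 0.5
```

A full DSP graph would then be the set of such connections, with each sink mixing its incoming sources at the computed gains.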