TAPESTREA: sound scene modeling by example

We present a new paradigm and framework for creating high-quality “sound scenes” from a set of recordings. A sound scene is a combination of background and foreground sounds that together evoke the sense of being in a specific environment. The ability to craft and control sound scenes is important in entertainment (movies, TV, games), virtual/augmented reality, art projects (live performances, installations), and other multimedia applications. Existing audio production tools require “untainted” versions of sound components and frequently involve tedious event-by-event editing. No system, to our knowledge, provides an arena for truly flexible “sound scene modeling by example,” where a sound scene can be composed from selected, extracted, and separated components of different existing scenes. We introduce a parametric, unified framework of analysis, transformation, and synthesis techniques that allows users to interactively select components from existing sounds, transform them independently, and controllably recombine them to create new sound scenes in real time. We call this system TAPESTREA: Techniques and Paradigms for Expressive Synthesis, Transformation and Rendering of Environmental Audio.