Soundscape Generation for Virtual Environments using Community-Provided Audio Databases

This research focuses on the generation of soundscapes from unstructured sound databases for the sonification of virtual environments. The design methodology uses concatenative synthesis to construct a sound environment from online community-provided sonic material, and an application of this methodology is described in which sound environments are generated for Google Street View using the online sound database Freesound. The model also allows for the creation of augmented soundscapes by using parameterization models as an input to the resynthesis paradigm, which incorporates multiple source and textural layers. A subjective evaluation of this application was performed to compare the immersive properties of the generated soundscapes with those of real recordings; the results suggest a general preference for generated soundscapes that incorporate sound design principles. The potential for further research and future applications in augmented reality, incorporating emerging web media technologies, is discussed.
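
To make the retrieval step concrete, the sketch below shows one plausible way to gather community-provided candidate sounds from Freesound for a layered scene description. It uses Freesound's public REST text-search endpoint; the API key placeholder, the `find_candidate_sounds` helper, and the example `scene` dictionary of source and texture queries are illustrative assumptions, not the paper's actual pipeline, and the concatenative resynthesis stage itself is not reproduced here.

```python
# Minimal sketch, assuming a Freesound API key and the `requests` library.
# The scene description and helper names are hypothetical illustrations.
import requests

FREESOUND_SEARCH = "https://freesound.org/apiv2/search/text/"
API_KEY = "YOUR_FREESOUND_API_KEY"  # placeholder, obtain from freesound.org

def find_candidate_sounds(query, max_results=5):
    """Query Freesound's text search for community-provided recordings."""
    params = {
        "query": query,
        "fields": "id,name,duration,previews",
        "page_size": max_results,
        "token": API_KEY,
    }
    response = requests.get(FREESOUND_SEARCH, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["results"]

# Hypothetical description of a Street View location as discrete source
# events plus continuous textural beds, mirroring the layered model.
scene = {
    "sources": ["car horn", "footsteps pavement"],
    "textures": ["city traffic ambience", "crowd murmur"],
}

for layer, queries in scene.items():
    for query in queries:
        for sound in find_candidate_sounds(query):
            print(layer, sound["id"], sound["name"])
```

In such a setup, the retrieved clips for each layer would then be segmented and recombined by the concatenative synthesis stage, with the source/texture split allowing discrete events and ambient beds to be mixed independently.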