Retargetting Example Sounds to Interactive Physics-driven Animations

This paper proposes a new method for generating audio in interactive animations driven by a physics engine. Our approach bridges the gap between direct playback of audio recordings and physically based synthesis by retargetting audio grains extracted from the recordings according to the output of a physics engine. In an off-line analysis step, we automatically segment audio recordings into atomic grains. The segmentation depends on the type of contact event: we distinguish between impulsive events, e.g., impacts or breaking sounds, and continuous events, e.g., rolling or sliding sounds. Recordings of continuous events are further decomposed into sinusoidal and transient components, which we encode separately. A technique similar to matching pursuit is then used to represent each original recording as a compact series of audio grains. During interactive animation, the grains are triggered individually or in sequence according to parameters reported by the physics engine and/or user-defined procedures. A first application is simply to reduce the size of the original audio assets. Above all, our technique makes it possible to synthesize non-repetitive sound events and provides extended authoring capabilities.
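The compact grain representation mentioned above relies on a greedy decomposition in the spirit of matching pursuit (Mallat and Zhang, 1993). As a minimal sketch, not the paper's actual encoder: the routine below repeatedly correlates the residual signal with a dictionary of unit-norm atoms, keeps the best-matching atom, and subtracts its contribution. The dictionary layout and the fixed atom budget `n_atoms` are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching-pursuit decomposition.

    signal:     (n_samples,) array.
    dictionary: (n_dict, n_samples) array of unit-norm atoms.
    Returns a list of (atom_index, coefficient) pairs and the residual.
    """
    residual = np.asarray(signal, dtype=float).copy()
    selection = []
    for _ in range(n_atoms):
        # Correlate the residual with every atom and keep the best match.
        corr = dictionary @ residual
        best = int(np.argmax(np.abs(corr)))
        coef = corr[best]
        selection.append((best, coef))
        # Remove the selected atom's contribution from the residual.
        residual = residual - coef * dictionary[best]
    return selection, residual
```

With an orthonormal dictionary the residual energy drops to zero once every contributing atom has been selected; with the overcomplete grain dictionaries implied by the paper, the loop would instead stop when the residual falls below a perceptual threshold.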
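At run time, grains are triggered from events reported by the physics engine. The sketch below illustrates one plausible mapping, not the paper's implementation: the class name `GrainBank`, the contact-type keys, and the impulse-based gain are all assumptions. Impulsive contacts fire a single randomly chosen grain (which avoids the repetitiveness of direct playback), scaled by the reported contact impulse.

```python
import random

class GrainBank:
    """Hypothetical grain store mapping a contact type to the audio
    grains (lists of samples) produced by the off-line analysis."""

    def __init__(self, grains):
        # e.g. {"impact": [grain1, grain2], "rolling": [grain3, ...]}
        self.grains = grains

    def trigger(self, contact_type, impulse, rng=random):
        """Pick one grain for this contact and scale it by the impulse."""
        grain = rng.choice(self.grains[contact_type])
        return [impulse * s for s in grain]
```

Continuous events such as rolling would instead sequence grains over time, driven by parameters like sliding speed; a user-defined procedure can override either behavior for authoring purposes.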
