Interactive Sonification of the Spatial Behavior of Human and Synthetic Characters in a Mixed-Reality Environment

It is widely acknowledged that music is a powerful carrier of emotions [4, 21], and that audition can play an important role in enhancing the sensation of presence in Virtual Environments [5, 22]. In mixed-reality environments and interactive multimedia systems such as Massively Multiplayer Online Role-Playing Games (MMORPGs), improving the user's sense of immersion is crucial. Nonetheless, the sonification of these environments is often reduced to its simplest expression, namely a set of prerecorded sound tracks, and background music frequently relies on repetitive, predetermined and somewhat predictable musical material. Hence, there is a need for a sonification scheme that can generate context-sensitive, adaptive, rich and consistent music in real time. In this paper we introduce a framework for the sonification of the spatial behavior of multiple human and synthetic characters in a Mixed-Reality environment. Previously we have used RoBoser [1] to sonify various interactive installations, including the interaction between humans and a large-scale accessible space called Ada [2]. Here we investigate the applicability of the RoBoser framework to the sonification of the continuous and dynamic interaction between individuals populating a mixed-reality space. We propose a semantic layer that maps sensor data onto intuitive parameters for the control of music generation, and show that the musical events are directly influenced by the spatial behavior of human and synthetic characters in the space, thus creating a behavior-dependent sonification that enhances the user's perception of immersion.
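As a rough illustration of what such a semantic layer might look like, the sketch below maps the tracked positions and velocities of two characters onto a few intuitive music-control parameters and sends them as Open Sound Control messages [9], for example to a Pure Data patch [4]. The specific mappings, OSC addresses, parameter ranges, 10 m room size, and the python-osc package are illustrative assumptions, not the implementation described in the paper.

```python
# A minimal sketch of a "semantic layer" that translates raw spatial
# tracking data into music-control parameters, sent via OSC [9].
# All addresses, ranges, and mappings below are hypothetical examples.
import math
from dataclasses import dataclass

from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc


@dataclass
class Character:
    x: float   # position in the space (meters)
    y: float
    vx: float  # velocity (meters/second)
    vy: float

    @property
    def speed(self) -> float:
        return math.hypot(self.vx, self.vy)


def semantic_parameters(a: Character, b: Character) -> dict:
    """Map raw sensor data onto intuitive music-control parameters."""
    distance = math.hypot(a.x - b.x, a.y - b.y)
    # Hypothetical mappings: proximity raises event density, combined
    # speed raises dynamics, mean x-position controls stereo panning.
    return {
        "/music/density": max(0.0, 1.0 - distance / 10.0),     # 0..1
        "/music/dynamics": min(1.0, (a.speed + b.speed) / 4.0),  # 0..1
        "/music/pan": ((a.x + b.x) / 2.0) / 10.0,  # 0..1 across a 10 m room
    }


if __name__ == "__main__":
    client = SimpleUDPClient("127.0.0.1", 9000)  # e.g. a Pure Data patch [4]
    human = Character(x=2.0, y=3.0, vx=0.5, vy=0.0)
    avatar = Character(x=7.5, y=3.0, vx=-1.0, vy=0.2)
    for address, value in semantic_parameters(human, avatar).items():
        client.send_message(address, value)
```

Under this scheme the music generator never sees raw coordinates: it only receives behaviorally meaningful quantities such as proximity and activity, which is what allows the resulting sonification to track the characters' spatial behavior continuously rather than triggering prerecorded tracks.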

[1] Stefania Serafin et al. Sound Design to Enhance Presence in Photorealistic Virtual Reality. ICAD, 2004.

[2] Jônatas Manzolli et al. Roboser: A Real-World Composition System. Computer Music Journal, 2005.

[3] P. Juslin et al. Emotional Expression in Music Performance: Between the Performer's Intention and the Listener's Experience. 1996.

[4] Miller Puckette et al. Pure Data. ICMC, 1997.

[5] Mariano Alcañiz et al. Virtual Food in Virtual Environments for the Treatment of Eating Disorders. Studies in Health Technology and Informatics, 2002.

[6] Paul F. M. J. Verschure et al. Live Soundscape Composition Based on Synthetic Emotions. IEEE Multimedia, 2003.

[7] Kynan Eng et al. Cognitive Virtual-Reality Based Stroke Rehabilitation. 2007.

[8] Robert Rowe et al. Interactive Music Systems: Machine Listening and Composing. 1992.

[9] Matthew Wright et al. Open SoundControl: A New Protocol for Communicating with Sound Synthesizers. ICMC, 1997.

[10] William R. Sherman et al. Understanding Virtual Reality: Interface, Application, and Design. Presence: Teleoperators & Virtual Environments, 2002.

[11] Leonard B. Meyer. Emotion and Meaning in Music. 1957.

[12] A. Gabrielsson et al. The Influence of Musical Structure on Emotional Expression. 2001.

[13] Robert H. Gilkey et al. The Sense of Presence for the Suddenly Deafened Adult: Implications for Virtual Environments. Presence: Teleoperators & Virtual Environments, 1995.

[14] W. Dowling. Emotion and Meaning in Music. 2008.

[15] Tobi Delbrück et al. Ada: Constructing a Synthetic Organism. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2002.

[16] P. König et al. A Model of the Ventral Visual System Based on Temporal Stability and Local Memory. PLoS Biology, 2006.

[17] Márcio O. Costa et al. Design for a Brain Revisited: The Neuromorphic Design and Functionality of the Interactive Space 'Ada'. Reviews in the Neurosciences, 2003.

[18] J. Sloboda et al. Music and Emotion: Theory and Research. 2001.

[19] Marcelo M. Wanderley et al. ESCHER: Modeling and Performing Composed Instruments in Real-Time. IEEE International Conference on Systems, Man, and Cybernetics, 1998.

[20] P. Laukka et al. Communication of Emotions in Vocal Expression and Music Performance: Different Channels, Same Code? Psychological Bulletin, 2003.

[21] Ville Pulkki et al. Virtual Sound Source Positioning Using Vector Base Amplitude Panning. 1997.

[22] John Chowning et al. FM Theory and Applications: By Musicians for Musicians. 1987.

[23] Paul F. M. J. Verschure et al. IQR: A Distributed System for Real-Time Real-World Neuronal Simulation. Neurocomputing, 2002.

[24] Thomas Hermann et al. An Introduction to Interactive Sonification. 2005.