Towards a multi-layer architecture for multi-modal rendering of expressive actions

Expressive content has multiple facets that can be conveyed by music, gesture, and actions. Different application scenarios may require different metaphors for controlling expressiveness. To meet the requirement of flexible representation, we propose a multi-layer architecture structured into three main levels of abstraction. At the top (user level) sits a semantic description, adapted to specific user requirements and conceptualizations. At the other end are low-level features describing parameters strictly tied to the rendering model. Between these two extremes, we propose an intermediate layer that provides a description shared by the various high-level representations on one side and that can be instantiated to the various low-level rendering models on the other. To provide a common representation across different expressive semantics and different modalities, we propose a physically inspired description specifically suited to expressive actions.
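The three-layer mapping described above can be sketched in code. This is a minimal illustrative assumption, not the paper's actual interface: the class names, the choice of expressive descriptors (emotion label plus intensity), the physical parameters (mass, friction, elasticity), and the example sound-synthesis parameters are all hypothetical, chosen only to show how a semantic description could be translated into a shared physically inspired layer and then instantiated for one rendering model.

```python
from dataclasses import dataclass

# All names and parameter choices below are illustrative assumptions.

@dataclass
class SemanticDescription:
    """User level: application-specific expressive semantics."""
    emotion: str      # e.g. "tender", "aggressive"
    intensity: float  # 0.0 .. 1.0

@dataclass
class PhysicalDescription:
    """Intermediate level: physically inspired, modality-neutral."""
    mass: float        # perceived inertia of the action
    friction: float    # resistance / smoothness of the action
    elasticity: float  # springiness of the movement

def semantic_to_physical(s: SemanticDescription) -> PhysicalDescription:
    """Instantiate a semantic description as shared physical parameters."""
    if s.emotion == "tender":
        target = PhysicalDescription(mass=0.3, friction=0.7, elasticity=0.2)
    elif s.emotion == "aggressive":
        target = PhysicalDescription(mass=0.9, friction=0.2, elasticity=0.8)
    else:
        target = PhysicalDescription(mass=0.5, friction=0.5, elasticity=0.5)
    # Intensity scales the deviation from a neutral action (all 0.5).
    neutral = 0.5
    return PhysicalDescription(
        mass=neutral + (target.mass - neutral) * s.intensity,
        friction=neutral + (target.friction - neutral) * s.intensity,
        elasticity=neutral + (target.elasticity - neutral) * s.intensity,
    )

def physical_to_sound_params(p: PhysicalDescription) -> dict:
    """Low level: instantiate the shared description for one
    rendering model (here, an imagined impact-sound synthesizer)."""
    return {
        "strike_velocity": p.mass * (1.0 - p.friction),
        "decay_time": p.elasticity,
    }
```

The same intermediate `PhysicalDescription` could equally be mapped onto a gesture or animation renderer, which is the point of the shared layer: high-level semantics are translated once, and each modality supplies only its own low-level instantiation.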
