Searching for Cross-Individual Relationships between Sound and Movement Features using an SVM Classifier

In this paper we present a method for studying relationships between features of sound and features of movement. The method has been tested by carrying out an experiment in which people moved an object in space along with short sounds. 3D position data of the object was recorded, and several features were calculated from each of the recordings. These features were provided as input to a classifier, which was able to classify the recorded actions satisfactorily, particularly considering that the only link between the actions performed by the different subjects was the sound they heard while making the action.
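The pipeline the abstract describes (summary features from 3D trajectories, fed to a classifier, evaluated per sound class) can be illustrated with a minimal sketch. This is not the authors' code: the specific features, the SVM settings, and the synthetic data below are all illustrative assumptions.

```python
# Hypothetical sketch: classify motion recordings by the sound they
# accompanied, using simple trajectory features and an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def trajectory_features(pos):
    """Summary features from a (T, 3) array of 3D positions.
    These example features are assumptions, not the paper's feature set."""
    vel = np.diff(pos, axis=0)            # frame-to-frame displacement
    speed = np.linalg.norm(vel, axis=1)   # scalar speed per frame
    return np.array([
        speed.mean(),                     # mean speed
        speed.std(),                      # speed variability
        speed.max(),                      # peak speed
        np.ptp(pos[:, 2]),                # vertical range of motion
    ])

# Synthetic stand-in data: two "sound" classes, 20 recordings each,
# where class 1 is moved faster on average than class 0.
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        steps = rng.normal(0.0, 0.01 * (1 + label), size=(100, 3))
        X.append(trajectory_features(np.cumsum(steps, axis=0)))
        y.append(label)
X, y = np.array(X), np.array(y)

# An RBF-kernel SVM, evaluated with 5-fold cross-validation.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```

Cross-individual classification as in the study would additionally require holding out whole subjects (e.g. with grouped cross-validation) rather than random folds, so that the classifier is tested on people it has never seen.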
