Mimicking Sound with Gesture as Interaction Paradigm
[1] R. Gittins, et al. Canonical Analysis: A Review with Applications in Ecology, 1985.
[2] Lawrence R. Rabiner, et al. A tutorial on hidden Markov models and selected applications in speech recognition, 1989, Proc. IEEE.
[3] William W. Gaver. How Do We Hear in the World?: Explorations in Ecological Acoustics, 1993.
[4] Marcelo M. Wanderley, et al. Mapping performer parameters to synthesis engines, 2002, Organised Sound.
[5] Antonio Camurri, et al. Improving the man-machine interface through the analysis of expressiveness in human movement, 2002, Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication.
[6] Alexander Refsum Jensenius, et al. Playing "Air Instruments": Mimicry of Sound-Producing Gestures by Novices and Experts, 2005, Gesture Workshop.
[7] Marc Leman, et al. Communicating expressiveness and affect in multimodal interactive systems, 2005, IEEE MultiMedia.
[8] Norbert Schnell, et al. MnM: a Max/MSP mapping toolbox, 2005, NIME.
[9] Diemo Schwarz, et al. FTM - Complex Data Structures for Max, 2005, ICMC.
[10] Alexander Refsum Jensenius, et al. Exploring Music-Related Gestures by Sound-Tracing - A Preliminary Study, 2006.
[11] Perry R. Cook, et al. Feature-Based Synthesis: Mapping Acoustic and Perceptual Features onto Synthesis Parameters, 2006, ICMC.
[12] Norbert Schnell, et al. Wireless sensor interface and gesture-follower for music pedagogy, 2007, NIME '07.
[13] Alexander Refsum Jensenius, et al. Action-Sound: Developing Methods and Tools to Study Music-Related Body Movement, 2007.
[14] Ginevra Castellano, et al. Expressive control of music and visual media by full-body movement, 2007, NIME '07.
[15] P. Janata, et al. Embodied music cognition and mediation technology, 2009.