A Music Expressive Communication with Sensor-Doll Interface

We propose a music expression system, named "Com-music," which generates music through interaction between the user and a sensor-equipped doll. Since the sensor-doll includes various sensors and a PC, it can detect not only raw data but also pre-defined gestures and contexts using HMMs (Hidden Markov Models). The doll has five levels of interaction as pre-defined contexts, which correspond to the strength and frequency of the user's interaction. Each interaction level has a different set of music-control mappings, so the doll reacts with musical expressions corresponding to the context. In this paper, we consider the sensor-doll system as a device for a new type of communication that uses musical expression as the communication medium.
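To illustrate the pipeline from sensed interaction to level-dependent music output, the following is a minimal Python sketch. It is an assumption-laden illustration, not the authors' implementation: the paper's HMM-based context recognition is replaced here by a simple weighted score, and all names, thresholds, and music parameters (classify_level, MAPPINGS, tempo/loudness/timbre) are hypothetical.

    # Minimal sketch of the level-based mapping idea described above.
    # Hypothetical names and thresholds throughout; the paper does not
    # specify its actual features, levels, or music parameters, and its
    # HMM context recognizer is replaced by a simple weighted score.

    from dataclasses import dataclass

    @dataclass
    class SensorSummary:
        strength: float   # mean magnitude of recent touch/motion readings, 0..1
        frequency: float  # interaction events per second over a short window

    def classify_level(s: SensorSummary) -> int:
        """Map interaction strength/frequency to one of five levels (0 = idle)."""
        score = 0.6 * s.strength + 0.4 * min(s.frequency / 5.0, 1.0)
        return min(int(score * 5), 4)

    # Each level selects a different set of music-control mappings
    # (assumed parameters: tempo in BPM, loudness 0..1, timbre preset).
    MAPPINGS = [
        {"tempo": 60,  "loudness": 0.2, "timbre": "calm"},
        {"tempo": 80,  "loudness": 0.4, "timbre": "soft"},
        {"tempo": 100, "loudness": 0.6, "timbre": "bright"},
        {"tempo": 120, "loudness": 0.8, "timbre": "lively"},
        {"tempo": 140, "loudness": 1.0, "timbre": "excited"},
    ]

    def react(s: SensorSummary) -> dict:
        """Return the music parameters the doll would use for this interaction."""
        return MAPPINGS[classify_level(s)]

    if __name__ == "__main__":
        # A strong, frequent interaction selects the most energetic mapping.
        print(react(SensorSummary(strength=0.9, frequency=4.0)))

The design point the sketch captures is that the doll does not map raw sensor values directly to sound; it first abstracts them into a small set of interaction contexts, and each context carries its own bundle of music-control parameters.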
