Human-robot ensemble between robot thereminist and human percussionist using coupled oscillator model

This paper presents a novel synchronization method for a human-robot ensemble based on coupled oscillators. We define an ensemble as a synchronized performance produced through interactions between independent players. To achieve better synchronized performance, the robot should predict the human's behavior so as to reduce the difference between the human's and the robot's onset timings. Existing studies of such synchronization only adapt to onset intervals and therefore need considerable time to synchronize. We use a coupled oscillator model to predict the human's behavior. Experimental results show that our method reduces the average onset-time error: with a steady metronome, a tempo-varying metronome, and a human drummer, errors are reduced by 38%, 10%, and 14% on average, respectively. These results indicate that predicting the human's behavior is effective for synchronized performance.
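To make the core idea concrete, the sketch below simulates phase entrainment with a Kuramoto-type coupled oscillator, a standard formulation of this model; the paper's exact equations and parameters are not given in the abstract, so every value and name here is an illustrative assumption, not the authors' implementation. The robot's phase is continuously pulled toward the human's phase, so the robot can schedule its next onset in advance rather than only reacting to past onset intervals.

```python
import numpy as np

# Minimal sketch of Kuramoto-type phase coupling (assumed formulation).
# One musical onset occurs each time a player's phase crosses a multiple
# of 2*pi. All parameter values are illustrative assumptions.

DT = 0.005                            # integration step [s] (assumed)
K = 3.0                               # coupling gain (assumed)
OMEGA_HUMAN = 2.0 * np.pi * 100 / 60  # human plays at a steady 100 BPM
OMEGA_ROBOT = 2.0 * np.pi * 90 / 60   # robot's natural tempo is 90 BPM

phi_h = 0.0  # human's phase
phi_r = 0.0  # robot's phase

for _ in range(int(10.0 / DT)):       # simulate 10 seconds (Euler steps)
    phi_h += DT * OMEGA_HUMAN
    # Kuramoto coupling: dphi_r/dt = omega_r + K * sin(phi_h - phi_r)
    phi_r += DT * (OMEGA_ROBOT + K * np.sin(phi_h - phi_r))

# Wrapped phase error; once entrained it settles to a constant lag of
# arcsin((OMEGA_HUMAN - OMEGA_ROBOT) / K) rather than drifting.
error = np.angle(np.exp(1j * (phi_h - phi_r)))

# After entrainment the robot advances at the human's tempo, so it can
# predict (and schedule) its next onset ahead of time.
time_to_next = (2.0 * np.pi - phi_r % (2.0 * np.pi)) / OMEGA_HUMAN
print(f"phase error after 10 s: {error:.3f} rad")
print(f"predicted time to next onset: {time_to_next:.3f} s")
```

Because the coupling gain K exceeds the frequency mismatch between the two oscillators, the robot phase-locks to the human within a few beats in this toy setting, which is the mechanism by which a coupled oscillator model can shorten the synchronization time compared with adapting to onset intervals alone.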
