Dance and Robotics

Recent generations of humanoid robots increasingly resemble humans in shape and articulatory capacities. This progress has motivated researchers to design dancing robots that can mimic the complexity and style of human choreography. Such complex actions are usually programmed manually and ad hoc, an approach that is both tedious and inflexible.

Researchers at the University of Tokyo have developed the learning-from-observation (LFO) training method to overcome this difficulty [1, 2]. LFO enables a robot to acquire knowledge of what to do and how to do it by observing human demonstrations. Direct mapping from human joint angles to robot joint angles doesn't work well because of the kinematic and dynamic differences between the observed person and the robot (for example, weight, balance, and arm and leg lengths). LFO therefore relies on predesigned task models, which represent only the actions (and features thereof) that are essential to mimicry. It uses these task models to recognize and parse the sequence of human actions (for example, "Now, pick up the box") and then adapts each action to the robot's morphology and dynamics so that the robot can mimic the movement. This indirect, two-step mapping is crucial for robust imitation and performance. LFO has been successfully applied to various hand-eye operations [1, 2]. Here we describe how to extend it to a dancing humanoid.
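To make the indirect, two-step mapping concrete, the short Python sketch below separates recognition (observation to task models) from adaptation (task models to robot-specific motion). The task-model fields, the recognizer rule, and the reachability check are hypothetical simplifications for illustration only, not the actual LFO implementation.

from dataclasses import dataclass

@dataclass
class Task:
    """Task model: an essential action plus the features needed to reproduce it."""
    name: str       # e.g., "pick_up_box"
    target: tuple   # a key feature, e.g., the object's position (x, y, z) in meters

def recognize(observations):
    """Step 1: parse observed human motion into a sequence of task models.
    A trivial rule stands in here for matching against predesigned task models."""
    return [Task("pick_up_box", pos)
            for label, pos in observations
            if label == "hand_reaches_object"]

def adapt(task, arm_length):
    """Step 2: re-plan the task for the robot's own kinematics instead of
    copying human joint angles (its limb lengths and balance differ)."""
    x, y, z = task.target
    reach = (x ** 2 + y ** 2 + z ** 2) ** 0.5
    return {"task": task.name,
            "required_reach_m": round(reach, 3),
            "feasible": reach <= arm_length}

# Observed human motion -> task sequence -> robot-specific plan.
observed = [("hand_reaches_object", (0.3, 0.1, 0.4))]
for task in recognize(observed):
    print(adapt(task, arm_length=0.6))

The point of the sketch is the separation of concerns: what was done is recognized once, in human terms, and how to do it is recomputed for the robot's body rather than copied from the demonstrator.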