Estimation of essential interactions from multiple demonstrations

To learn a new everyday task under the "Learning from Observation" framework, the system needs to detect which parts of a demonstration are essential to completing the task, without relying on task-dependent knowledge. In previous research, we proposed a technique that estimates the essential interactions in a task by integrating multiple demonstrations of virtually the same task. Although that technique could automatically segment the essential interactions and determine their number, its segmentation algorithm depended on heuristics and could only capture stationary interactions. In this paper, a novel technique is proposed that overcomes this limitation and can estimate almost any type of interaction. In this approach, the demonstrator gives an explicit signal once during each essential interaction, as a hint that the interaction is occurring. From visual information and these signals, the system automatically identifies the essential parts of the task and their durations, and also detects which environmental objects the manipulated object interacts with. This information is hard to obtain from a single demonstration because of the ambiguity in interpreting interactions, especially in cluttered environments. The proposed method is evaluated both in simulation and in the real world using a humanoid robot.
