Intention deduction from demonstrated trajectory for tool-handling task

Programming a robot for a home-like environment is very demanding, so the concept of learning by demonstration has been introduced to relieve the user of detailed analysis and programming. Following this concept, we propose a novel approach that lets the robot deduce the demonstrator's intention from the trajectories recorded during task execution. We focus on the tool-handling task, which is common in the home environment but complicated to analyze. The proposed approach neither pre-defines motions nor constrains motion speed, and it tolerates altered event orders and redundant operations during the demonstration. We apply the concept of cross-validation to locate the portions of the trajectory that correspond to delicate, skillful maneuvering, and use a previously developed dynamic-programming algorithm to search for the most probable intention. In experiments, we applied the proposed approach to two kinds of tasks, pouring and coffee making, with the number of objects and their locations varied across demonstrations. To further investigate the method's scalability and generality, we also performed an intensive analysis of the parameters involved in the tasks.
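The abstract does not spell out the dynamic-programming algorithm, but DP-based trajectory matching of this kind is commonly realized with dynamic time warping (DTW), which aligns a demonstrated trajectory segment against candidate templates without constraining motion speed. The sketch below is a minimal, generic DTW distance in Python; the function name, 1-D trajectory representation, and absolute-difference cost are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D trajectories.

    D[i, j] holds the minimal cumulative cost of aligning the first i
    samples of `a` with the first j samples of `b`; the recurrence allows
    stretching or compressing either trajectory in time.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local sample cost (assumed metric)
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]

# Illustrative use: pick the template whose warped distance to the
# demonstrated segment is smallest, as a proxy for the intended action.
templates = {"pour": [0.0, 0.5, 1.0, 0.5], "stir": [0.0, 1.0, 0.0, 1.0]}
segment = [0.0, 0.4, 0.6, 1.0, 0.6]
best = min(templates, key=lambda k: dtw_distance(segment, templates[k]))
```

Because the cumulative cost is minimized over all time alignments, a demonstration performed slowly or quickly still matches the same template, which is consistent with the abstract's claim of placing no constraints on motion speed.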
