Acquiring hand-action models by attention point analysis

This paper describes our research on learning task-level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent them as symbolic task models. We propose a framework for such models that efficiently integrates multiple observations based on attention points, and we evaluate the resulting models on a human-form robot. The core of the framework is a two-step observation mechanism. In the first step, the system roughly observes the entire sequence of the human demonstration, builds a coarse task model, and extracts attention points (APs); an attention point indicates a time and position in the observation sequence that requires further detailed analysis. In the second step, the system closely examines the sequence around each AP and obtains attribute values for the task model, such as what to grasp, which hand to use, or the precise trajectory of the manipulated object. We implemented this system on a human-form robot and demonstrated its effectiveness.
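To make the two-step mechanism concrete, the following is a minimal sketch of a coarse-pass/fine-pass pipeline over a recorded trajectory. Everything here is an illustrative assumption rather than the paper's implementation: the function names (coarse_pass, fine_pass), the motion-energy heuristic for proposing APs, and the synthetic data are all hypothetical; the actual system works on visual observations of hand actions.

```python
# Hypothetical sketch of the two-step attention-point (AP) observation scheme.
# Step 1 scans the whole demonstration cheaply and proposes APs; step 2
# re-examines a small window around each AP to fill in task-model attributes.
import numpy as np

def coarse_pass(positions: np.ndarray, threshold: float) -> list[int]:
    """Step 1: flag frame indices whose motion changes abruptly.
    (Assumed heuristic: large second differences of the hand position.)"""
    velocity = np.diff(positions, axis=0)
    accel = np.linalg.norm(np.diff(velocity, axis=0), axis=1)
    return [i + 1 for i, a in enumerate(accel) if a > threshold]

def fine_pass(positions: np.ndarray, ap: int, window: int = 5) -> dict:
    """Step 2: extract attribute values around one AP (here, just the local
    trajectory segment and a crude grasp/release label as stand-ins)."""
    lo, hi = max(0, ap - window), min(len(positions), ap + window + 1)
    segment = positions[lo:hi]
    # Assumed toy rule: if height (z) drops across the window, call it a grasp.
    closing = positions[ap][2] < positions[lo][2]
    return {"frame": ap,
            "trajectory": segment,
            "action": "grasp" if closing else "release"}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic hand trajectory: smooth motion with one abrupt event at frame 50.
    t = np.linspace(0.0, 1.0, 100)
    positions = np.stack([t, np.sin(t), 1.0 - t], axis=1)
    positions[50:] += np.array([0.3, 0.0, -0.2])      # sudden jump = candidate AP
    positions += 0.001 * rng.standard_normal(positions.shape)

    aps = coarse_pass(positions, threshold=0.1)
    for m in (fine_pass(positions, ap) for ap in aps):
        print(m["frame"], m["action"], m["trajectory"].shape)
```

The design point the sketch illustrates is the division of labor: the coarse pass is cheap enough to run over the entire demonstration, while the expensive attribute extraction runs only on the short windows the APs select.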
