Towards Task Understanding through Multi-State Visuo-Spatial Perspective Taking for Human-Robot Interaction

For a lifelong learning robot, in the context of task understanding, it is important to distinguish the 'meaning' of a task from the 'means' to achieve it. In this paper we select a set of tasks from a typical human-robot interaction scenario, such as show, hide, and make accessible, and illustrate that visuo-spatial perspective taking can be used effectively to understand the semantics of such tasks in terms of their 'effects'. The idea is that, to understand the 'effects', the robot analyzes the reachability and visibility of an agent not only from the agent's current state but also from a set of virtual states that the agent might attain with different levels of effort. We show that such a symbolic understanding of tasks can be generalized to new situations and spatial arrangements, and also facilitates a 'transfer of understanding' among heterogeneous robots. The robot begins to understand the semantics of a task from the first demonstration and continuously refines its understanding with further examples.
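The sketch below is a minimal Python illustration (not the paper's implementation) of how multi-state perspective taking could be organized: for each agent state (the current one plus assumed virtual states, each labeled with a hypothetical effort level), visibility and reachability of a target object are evaluated, and the resulting symbolic facts before and after a demonstration are compared to characterize the task's 'effect'. All names here (AgentState, is_visible, is_reachable, the effort labels) are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical effort levels with which an agent might attain a virtual state.
EFFORTS = ["no_effort", "arm_effort", "torso_effort", "whole_body_effort"]

@dataclass
class AgentState:
    name: str    # e.g. "current", "lean_forward", "stand_up" (illustrative labels)
    effort: str  # one of EFFORTS

def perspective_facts(
    states: List[AgentState],
    is_visible: Callable[[AgentState, str], bool],
    is_reachable: Callable[[AgentState, str], bool],
    obj: str,
) -> Dict[Tuple[str, str], bool]:
    """Evaluate visibility/reachability of `obj` from every current or virtual state.

    Facts are keyed by (predicate, effort): a fact holds at a given effort level
    if it holds in at least one state attainable with that effort.
    """
    facts: Dict[Tuple[str, str], bool] = {}
    for s in states:
        facts[("visible", s.effort)] = facts.get(("visible", s.effort), False) or is_visible(s, obj)
        facts[("reachable", s.effort)] = facts.get(("reachable", s.effort), False) or is_reachable(s, obj)
    return facts

def task_effect(
    before: Dict[Tuple[str, str], bool],
    after: Dict[Tuple[str, str], bool],
) -> Dict[Tuple[str, str], Tuple[bool, bool]]:
    """Symbolic 'effect' of a demonstration: the facts whose truth value changed."""
    return {k: (before.get(k, False), v) for k, v in after.items() if before.get(k, False) != v}
```

Under these assumptions, a task such as 'make accessible' would show up as the fact ("reachable", "no_effort") turning true for the partner agent after the demonstration, independently of the particular motion the demonstrator used, which is what allows the symbolic effect to generalize across spatial arrangements and across heterogeneous robots.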
