Human Robot Collaboration to Reach a Common Goal in an Assembly Process

Enabling robotic systems to collaborate with humans is a challenging task at several levels of abstraction. Such systems must understand the context in which they operate, perceiving, planning, and reasoning in order to team up with a human. The robotic system should also have perspective-taking capabilities to collaborate with the human efficiently. In this work, an integrated cognitive architecture for human-robot collaboration is proposed that develops perspective-taking capabilities from human preferences. This is achieved through a ‘mental model’ that combines human preferences, knowledge of the task (including the objects involved), and the capabilities of both the human and the robot. The mental model forms the basis of the cognitive architecture for perceiving, reasoning, and planning in the human-robot collaborative scenario. Guided by the cognitive architecture, the robotic platform performs ‘picking’, ‘showing’, ‘placing’, and ‘handover’ actions on real-world objects relevant to the assembly process, in coordination with the human. The goal is to answer the ‘how’ (how a manipulation action should be carried out by the robot in a dynamically changing environment) and the ‘where’ (where the manipulation action should take place) of the assembly process under varying human preferences. We show through experiments and evaluation that the proposed cognitive architecture is capable of answering these questions.
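
The abstract does not describe an implementation, but the following minimal Python sketch illustrates one way such a ‘mental model’ could be organized: it combines human preferences, task knowledge, and agent capabilities, and uses them to answer the ‘how’ (which action to perform) and the ‘where’ (which target location) for each part in the assembly. All class, field, and function names here are illustrative assumptions, not the authors' architecture or API.

```python
from dataclasses import dataclass, field


@dataclass
class MentalModel:
    """Hypothetical sketch of a mental model for human-robot assembly.

    Combines human preferences, task knowledge, and the capabilities of
    both agents; not the implementation described in the paper.
    """
    # Preferred delivery mode per part, e.g. 'handover' or 'placing' (assumed)
    human_preferences: dict = field(default_factory=dict)
    # Assembly knowledge: part name -> target location in the workspace (assumed)
    task_knowledge: dict = field(default_factory=dict)
    # Actions each agent can perform
    robot_capabilities: set = field(
        default_factory=lambda: {"picking", "showing", "placing", "handover"}
    )
    human_capabilities: set = field(default_factory=lambda: {"assembling", "handover"})

    def decide_how(self, part: str) -> str:
        """Answer the 'how': pick an action that respects the human's preference
        and lies within the robot's capabilities, falling back to 'placing'."""
        preferred = self.human_preferences.get(part, "placing")
        return preferred if preferred in self.robot_capabilities else "placing"

    def decide_where(self, part: str) -> str:
        """Answer the 'where': look up the target location from task knowledge."""
        return self.task_knowledge.get(part, "shared_workspace")


if __name__ == "__main__":
    # Example with two hypothetical assembly parts and varying preferences.
    model = MentalModel(
        human_preferences={"gear": "handover", "housing": "placing"},
        task_knowledge={"gear": "fixture_A", "housing": "tray_2"},
    )
    for part in ("gear", "housing"):
        print(part, "->", model.decide_how(part), "at", model.decide_where(part))
```

In this sketch, varying the entries of `human_preferences` changes the chosen action without altering the task knowledge, which mirrors the abstract's point that the same assembly goal can be reached under different human preferences.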
