Joint-action for humans and industrial robots for assembly tasks

This paper presents the concept of a smart working environment designed to enable true joint action between humans and industrial robots. The proposed system perceives its environment through multiple sensor modalities and acts in it with an industrial robot manipulator, assembling capital goods together with a human worker. Combined with the reactive behavior of the robot, this enables safe collaboration between human and robot. Furthermore, the system anticipates human behavior based on knowledge databases and decision processes, ensuring effective human-robot collaboration. As a proof of concept, we introduce a use case in which an arm is assembled and mounted on a robot's body.
