Joint Action for Humans and Industrial Robots (JAHIR): Progress Report

This progress report summarises the activities and achievements carried out during the first funding period of the CoTeSys project Joint Action for Humans and Industrial Robots (JAHIR). As stated in the initial project proposal, the major goals of JAHIR are to investigate the cognitive basis of true joint human-robot interaction and to use this knowledge to establish a demonstrator platform within the cognitive factory that can be used by other projects. This document therefore gives a succinct overview of the projected application scenario together with the layout of the assembly cell. A general outline of the software architecture is presented, and the input devices used are listed. Finally, current cross activities within the cluster and other project-relevant activities are described.
