Toward a library of manipulation actions based on semantic object-action relations

The goal of this study is to provide an architecture for a generic definition of robot manipulation actions. We emphasize that the action representation presented here is "procedural": the structural elements of our action representations are defined as execution protocols. To achieve this, manipulations are defined on three levels. The top level defines objects, their relations, and the actions in an abstract, symbolic way. A mid-level sequencer, with which the action primitives are chained, structures the actual action execution, which is performed via the bottom level. This lowest level collects data from sensors and communicates with the control system of the robot. This method enables robot manipulators to execute the same action in different situations, i.e., on different objects with different positions and orientations. In addition, two methods for detecting action failure are provided, which are necessary for handling faults in the system. To demonstrate the effectiveness of the proposed framework, several different actions are performed on our robotic setup and the results are shown. In this way we are building a library of human-like robot actions, which can be used by higher-level task planners to execute more complex tasks.
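The three-level organization described above can be illustrated with a minimal sketch. All class, method, and primitive names here are hypothetical, chosen only to show the layering (symbolic top level, mid-level sequencer, bottom-level sensing/control with failure detection); they are not the paper's actual interfaces.

```python
from dataclasses import dataclass
from typing import List

# Top level: abstract, symbolic description of an action in terms of
# object roles and an ordered list of primitive names.
@dataclass
class SymbolicAction:
    name: str                # e.g. "push", "pick-and-place"
    objects: List[str]       # symbolic object roles, e.g. ["hand", "box"]
    primitives: List[str]    # ordered action primitives to chain

# Bottom level: sensing and robot control behind a narrow interface.
class RobotInterface:
    def sense(self) -> dict:
        # would query cameras/encoders; stubbed for this sketch
        return {"gripper_closed": False}

    def execute_primitive(self, name: str, scene: dict) -> bool:
        # would send commands to the controller and report success/failure
        print(f"executing primitive: {name}")
        return True

# Mid level: sequencer that chains primitives and aborts on failure,
# so the same symbolic action can run on different objects and poses.
def run_action(action: SymbolicAction, robot: RobotInterface) -> bool:
    for prim in action.primitives:
        scene = robot.sense()          # ground symbols in the current scene
        if not robot.execute_primitive(prim, scene):
            print(f"action '{action.name}' failed at '{prim}'")
            return False               # failure detected -> abort sequence
    return True
```

For example, a push action might be declared symbolically once and then executed on any reachable object: `run_action(SymbolicAction("push", ["hand", "box"], ["approach", "contact", "move", "retract"]), RobotInterface())`.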
