Learning and performing place-based mobile manipulation

What it means for an object to be ‘within reach’ depends strongly on the morphology and skills of a robot. In this paper, we enable a mobile manipulation robot to learn a concept of PLACE, a location from which successful manipulation is possible, through trial-and-error interaction with the environment. Through this developmental approach, PLACE is grounded in observed experience and takes the hardware and skills of the robot into account. During task execution, the model is used to determine optimal grasp places in a least-commitment approach. The learned PLACE model accounts for uncertainty in both the robot and target-object positions, and leads to more robust behavior.
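The selection step described above can be sketched in a minimal form. The snippet below is a hypothetical illustration, not the paper's actual method: `success_model` stands in for a model learned from trial-and-error grasp outcomes, expected success is estimated by Monte Carlo sampling over assumed Gaussian position noise, and the robot commits to the candidate base position with the highest expected success.

```python
import numpy as np

rng = np.random.default_rng(0)

def success_model(base_xy, target_xy, reach=0.8):
    # Stand-in for a learned PLACE model: grasp success probability
    # falls off with the distance between robot base and target.
    d = np.linalg.norm(np.asarray(base_xy) - np.asarray(target_xy))
    return float(np.exp(-((d / reach) ** 2)))

def expected_success(base_xy, target_xy, pose_sigma=0.05, n_samples=200):
    # Account for position uncertainty: average the success model
    # over samples of assumed Gaussian noise on the base position.
    noise = rng.normal(0.0, pose_sigma, size=(n_samples, 2))
    scores = [success_model(np.asarray(base_xy) + e, target_xy) for e in noise]
    return float(np.mean(scores))

def best_grasp_place(candidates, target_xy):
    # Least-commitment selection: score every candidate base position
    # and only then commit to the one with highest expected success.
    scores = [expected_success(c, target_xy) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

candidates = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0)]
place, score = best_grasp_place(candidates, target_xy=(0.6, 0.0))
```

In this toy setup, the candidate nearest the target wins; with a real learned model, the ranking would instead reflect the robot's own morphology and observed grasp history.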
