Learning the Geometric Meaning of Symbolic Abstractions for Manipulation Planning

We present an approach for learning a mapping between geometric states and logical predicates. Such a mapping is a necessary component of any robotic system that combines task-level reasoning with path planning. Consider a robot tasked with putting a number of cups on a tray. To achieve the goal, the robot must find positions for all the objects, and may need to stack one cup inside another to fit them all on the tray. This requires translating back and forth between the symbolic states the planner uses, such as “stacked(cup1,cup2)”, and geometric states representing the positions and poses of the objects. The mapping we learn in this paper achieves this translation. We learn it from labelled examples and, significantly, learn a representation that can be used in both the forward (geometric to symbolic) and reverse directions. This enables us to build symbolic representations of scenes the robot observes, and also to translate a desired symbolic state from a plan into a geometric state that the robot can actually achieve through manipulation. We also show how the approach can generate significantly different geometric solutions to support backtracking. We evaluate the work both in simulation and on a robot arm.
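To make the bidirectional idea concrete, here is a minimal sketch of one plausible realisation: a predicate is modelled as a kernel density estimate over geometric features (e.g. the relative pose of two objects), which can be thresholded in the forward direction (does the predicate hold?) and sampled in the reverse direction (produce a geometry in which it holds). All class and method names here are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

class PredicateModel:
    """Hypothetical model of one predicate, e.g. "stacked(a,b)", as a
    Gaussian kernel density estimate over relative-pose features."""

    def __init__(self, bandwidth=0.05, threshold=0.5):
        self.examples = []          # positive feature vectors (training data)
        self.bandwidth = bandwidth  # Gaussian kernel width
        self.threshold = threshold  # density cutoff for classification

    def fit(self, positive_features):
        """Store labelled positive examples of the predicate."""
        self.examples = [list(f) for f in positive_features]

    def _density(self, x):
        # Mean of Gaussian kernels centred on the training examples.
        h = self.bandwidth
        total = 0.0
        for e in self.examples:
            d2 = sum((xi - ei) ** 2 for xi, ei in zip(x, e))
            total += math.exp(-d2 / (2.0 * h * h))
        return total / len(self.examples)

    def holds(self, x):
        # Forward direction: geometric features -> truth of the predicate.
        return self._density(x) >= self.threshold

    def sample(self, rng=random):
        # Reverse direction: draw a geometric configuration in which the
        # predicate is likely to hold, by sampling from the same density
        # (pick a training example, perturb it by the kernel).
        centre = rng.choice(self.examples)
        return [c + rng.gauss(0.0, self.bandwidth) for c in centre]

# Example: "stacked" learned from relative (dx, dy, dz) offsets where one
# cup sits ~5 cm inside/above another (made-up training data).
stacked = PredicateModel()
stacked.fit([[0.0, 0.0, 0.05], [0.01, 0.0, 0.05]])
print(stacked.holds([0.0, 0.01, 0.05]))   # near the examples
print(stacked.holds([0.5, 0.5, 0.0]))     # far away
print(stacked.sample(random.Random(0)))   # a geometry satisfying the predicate
```

Sampling from the same density that defines the classifier is what makes the representation usable in both directions; drawing repeatedly (and rejecting samples in collision) would also yield the "significantly different geometric solutions" needed for backtracking.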
