Leveraging experience in lazy search

Lazy graph search algorithms are efficient at solving motion planning problems where edge evaluation is the computational bottleneck. These algorithms work by lazily computing the shortest potentially feasible path, evaluating edges along that path, and repeating until a feasible path is found. The order in which edges are selected is critical to minimizing the total number of edge evaluations: a good edge selector chooses edges that are not only likely to be invalid, but that also eliminate many future paths from consideration when found invalid. We wish to learn such a selector by leveraging prior experience. We formulate this problem as a Markov Decision Process (MDP) on the state of the search problem. While solving this large MDP is generally intractable, we show that during training we can compute oracular selectors that solve the MDP. With access to such oracles, we use imitation learning to find effective policies. If new search problems are sufficiently similar to problems solved during training, the learned policy will choose a good edge-evaluation ordering and solve the motion planning problem quickly. We evaluate our algorithms on a wide range of 2D and 7D problems and show that the learned selector outperforms commonly used baseline heuristics.
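To make the propose-evaluate-repeat loop concrete, below is a minimal Python sketch of a lazy search with a pluggable edge selector. All names here (shortest_path, lazy_search, select_edge, the adjacency-dict graph format) are illustrative assumptions rather than the paper's actual implementation; in practice, select_edge would be the learned policy.

    import heapq

    def shortest_path(graph, source, target, invalid):
        # Dijkstra over optimistic edge weights, skipping known-invalid edges.
        dist, parent = {source: 0.0}, {source: None}
        frontier = [(0.0, source)]
        while frontier:
            d, u = heapq.heappop(frontier)
            if u == target:
                break
            if d > dist[u]:
                continue
            for v, w in graph[u].items():
                if frozenset((u, v)) in invalid:
                    continue
                if d + w < dist.get(v, float("inf")):
                    dist[v], parent[v] = d + w, u
                    heapq.heappush(frontier, (d + w, v))
        if target not in parent:
            return None  # target unreachable given the invalid edges found so far
        path, node = [], target
        while node is not None:
            path.append(node)
            node = parent[node]
        return path[::-1]

    def lazy_search(graph, source, target, is_edge_valid, select_edge):
        # Propose the shortest potentially feasible path, evaluate one of its
        # unevaluated edges (chosen by the selector), and repeat.
        valid, invalid = set(), set()
        while True:
            path = shortest_path(graph, source, target, invalid)
            if path is None:
                return None  # every candidate path contains an invalid edge
            edges = [frozenset(e) for e in zip(path, path[1:])]
            unevaluated = [e for e in edges if e not in valid]
            if not unevaluated:
                return path  # all edges on this path are known valid
            e = select_edge(unevaluated)  # learned policy or heuristic goes here
            (valid if is_edge_valid(*e) else invalid).add(e)

    # Toy usage with a "first edge on the path" selector; the edge (1, 3)
    # is the only one in collision (graph and values are hypothetical):
    graph = {0: {1: 1.0, 2: 1.5}, 1: {0: 1.0, 3: 1.0},
             2: {0: 1.5, 3: 1.0}, 3: {1: 1.0, 2: 1.0}}
    blocked = {frozenset((1, 3))}
    print(lazy_search(graph, 0, 3,
                      lambda u, v: frozenset((u, v)) not in blocked,
                      lambda candidates: candidates[0]))  # [0, 2, 3]

The selector is the only policy-dependent piece: the toy selector above simply evaluates the first unevaluated edge, whereas the learned selector described in the abstract would rank edges by how likely they are to be invalid and how many alternative paths their invalidation would prune.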
