Planning-based prediction for pedestrians

We present a novel approach for planning robot motions that efficiently accomplish the robot's tasks without hindering the movements of people in the environment. Our approach models the goal-directed trajectories of pedestrians using maximum entropy inverse optimal control. A key advantage of this modeling approach is that its learned cost function generalizes both to changes within an environment and to entirely new environments. We employ this model's pedestrian-trajectory predictions in a novel incremental planner and quantitatively show the improvement in hindrance-sensitive robot trajectory planning provided by our approach.
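Under the maximum entropy model, a trajectory's probability decays exponentially with its cumulative cost, and the induced stochastic policy can be computed by a "soft" value iteration. The sketch below illustrates this idea on a 4-connected grid with an already-learned scalar cost map; the function `maxent_policy`, the grid setup, and all parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def maxent_policy(cost, goal, n_iters=200):
    """Soft (maximum-entropy) value iteration on a 4-connected grid.

    cost : (H, W) array of per-step state costs (feature weights are
           assumed to have been folded into this scalar cost map).
    goal : (row, col) absorbing goal cell.
    Returns log pi(a | s), shape (H, W, 4), for actions up/down/left/right.
    """
    H, W = cost.shape
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    V = np.full((H, W), np.inf)   # soft value: negative log-partition
    V[goal] = 0.0
    for _ in range(n_iters):
        Q = np.full((H, W, 4), np.inf)
        for a, (dr, dc) in enumerate(moves):
            # slice of states for which this move stays on the grid
            r0, r1 = max(0, -dr), min(H, H - dr)
            c0, c1 = max(0, -dc), min(W, W - dc)
            Q[r0:r1, c0:c1, a] = cost[r0:r1, c0:c1] + \
                V[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
        # V(s) = -log sum_a exp(-Q(s, a)): a softened minimum over actions
        with np.errstate(divide="ignore"):
            V = -np.log(np.exp(-Q).sum(axis=-1))
        V[goal] = 0.0             # keep the goal absorbing
    # log pi(a | s) = V(s) - Q(s, a), so pi is proportional to exp(-Q)
    return V[..., None] - Q
```

A forward pass through this policy yields expected state-visitation frequencies, which is the form of prediction a planner can penalize to avoid hindering pedestrians.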
