Acquisition of Behavioral Dynamics for Vision Based Mobile Robot Navigation from Demonstrations

Abstract The design of robust vision-based navigation behaviors remains a challenge in mobile robotics, as it requires a coherent mapping between complex visual perceptions and the associated robot motions. This contribution proposes a framework to learn this general relationship from a small set of representative demonstrations in which an expert manually navigates the robot through its environment. Behaviors are represented by a dynamical system that ties perceptions to actions. The state of the behavioral dynamics is characterized by a small set of visual features extracted from an omnidirectional image of the local environment. Recording, learning, and generalization take place in the product space of visual features and robot controls. Training instances are recorded for three distinct behaviors, namely corridor following, obstacle avoidance, and homing. Behavioral dynamics are represented as Gaussian mixture models, the parameters of which are identified from the recorded demonstrations. The learned behaviors accomplish their tasks across a diverse set of initial poses and situations. In order to realize global navigation, the behaviors are coordinated via hand-designed arbitration or command fusion schemes. The experimental validation of the proposed approach confirms that the acquired visual navigation behaviors, acting in cooperation, accomplish robust navigation in indoor environments.
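To make the behavior representation more concrete, the following minimal sketch shows one way such a scheme can be realized: a Gaussian mixture model is fitted to demonstration samples in the joint space of visual features and control commands, and a control for a new feature vector is obtained by Gaussian mixture regression, with a simple weighted blend standing in for the command fusion layer. The feature and control dimensions, the component count, and the use of scikit-learn are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative dimensions (assumptions, not taken from the paper):
D_F = 6   # visual features extracted from the omnidirectional image
D_U = 2   # robot controls, e.g. translational and rotational velocity
K = 5     # number of mixture components

def learn_behavior(demonstrations, n_components=K, seed=0):
    """Fit a GMM to demonstration samples stacked in the joint
    feature-control space, one row per recorded instant (D_F + D_U columns)."""
    data = np.vstack(demonstrations)
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(data)

def gmr_control(gmm, features):
    """Gaussian mixture regression: expected control given the current features."""
    f = np.asarray(features, dtype=float)
    log_w = np.empty(gmm.n_components)
    cond_means = np.empty((gmm.n_components, D_U))
    for k in range(gmm.n_components):
        mu_f = gmm.means_[k, :D_F]
        mu_u = gmm.means_[k, D_F:]
        S_ff = gmm.covariances_[k][:D_F, :D_F]
        S_uf = gmm.covariances_[k][D_F:, :D_F]
        S_ff_inv = np.linalg.inv(S_ff)
        diff = f - mu_f
        # unnormalized log responsibility of component k for the observed features
        log_w[k] = (np.log(gmm.weights_[k])
                    - 0.5 * diff @ S_ff_inv @ diff
                    - 0.5 * np.linalg.slogdet(S_ff)[1])
        # conditional mean of the controls given the features
        cond_means[k] = mu_u + S_uf @ S_ff_inv @ diff
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return w @ cond_means   # blended control command

def fuse_commands(commands, weights):
    """Command fusion: weighted blend of the commands proposed by several behaviors."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ np.vstack(commands)
```

In a navigation loop, each behavior's `gmr_control` output could either be selected outright by an arbitration rule or blended with context-dependent weights via `fuse_commands`, mirroring the hand-designed coordination schemes mentioned above.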
