A Hybrid Reactive Navigation Strategy for a Non-holonomic Mobile Robot in Cluttered Environments

Reactive collision-free navigation is challenging because only limited information about the environment is available to the robot. In this paper, we propose a novel hybrid reactive navigation strategy for non-holonomic mobile robots in cluttered environments. The strategy combines a reactive navigation algorithm with Q-learning, aiming to retain the desirable characteristics of both approaches while overcoming the shortcomings of relying on either one alone. The performance of the proposed strategy is verified by computer simulations, and good results are obtained.
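As background for the learning component, the standard tabular Q-learning update on which such hybrid strategies typically build is, for state s_t, action a_t, reward r_{t+1}, learning rate \alpha, and discount factor \gamma,

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right].

The particular state and action encodings, the reward design, and the rule for switching between the reactive controller and the learned policy are design choices of the proposed strategy and are not specified in this abstract; the update above is shown only as the generic Q-learning rule.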
