Enabling UAV Navigation with Sensor and Environmental Uncertainty in Cluttered and GPS-Denied Environments

Unmanned Aerial Vehicles (UAVs) can navigate with low risk in obstacle-free environments using ground control stations that plan a path as a series of GPS waypoints. GPS waypoint navigation becomes dangerous, however, in environments where the GPS signal is unreliable or only intermittently available and where the airspace is cluttered with obstacles. Navigation is then challenging because the UAV must rely on other sensors, which introduce uncertainty into its localisation and motion systems, especially on low-cost platforms. Additional uncertainty affects the mission when the goal location is only partially known and must be discovered by exploring and detecting a target. This research formulates the navigation problem as a Partially Observable Markov Decision Process (POMDP), producing a policy that maps belief states and observations to motion commands. The policy is computed and updated on-line during flight by a newly developed system for UAV Uncertainty-Based Navigation (UBNAV), which navigates cluttered and GPS-denied environments by acting on observations and executing motion commands rather than following waypoints. Experimental results in both simulation and real flight tests show that the UAV finds a path on-line to a region where it can explore and detect a target without colliding with obstacles. UBNAV provides a new method and an enabling technology for researchers to implement and test on-line POMDP-based UAV navigation missions under uncertainty, including target detection, in real flight scenarios.
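For reference, a belief-state policy of this kind rests on the standard POMDP belief update; the notation below is generic and not taken from this paper:

b'(s') = \eta \, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s)

where T is the transition model, O the observation model, and \eta a normalising constant. An on-line solver approximates an optimal policy \pi(b) over these beliefs, selecting the next motion command after each observation.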
