Three Dimensional Obstacle Avoidance of Autonomous Blimp Flying in Unknown Disturbance

A blimp-type unmanned aerial vehicle (BUAV) maintains its longitudinal motion against gravity using the buoyancy provided by the surrounding air, which means that the density of a BUAV is nearly equal to that of the air. Because of this, the motion of a BUAV is strongly affected by flow disturbances, whose distribution is usually non-uniform and unknown. In addition, the inertia of the heading motion is very large, and the strict limit on the weight of on-board equipment means that most BUAVs are so-called under-actuated robots. Under these conditions, motion planning that accounts for the stochastic properties of the disturbance is needed for obstacle avoidance. In this paper, we propose an approach to the motion planning of the BUAV based on a Markov decision process (MDP). The proposed approach consists of a method to construct a discrete MDP model of the BUAV's motion and a method to account for the effect of the unknown wind on the BUAV's motion. The performance of the methods is examined through dynamical simulation of the BUAV in an environment with wind disturbance.
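
The paper gives no code, but the core idea of planning over a discrete MDP in which the unknown wind appears as transition uncertainty can be sketched as follows. This is a minimal, illustrative sketch, not the authors' implementation: the grid dimensions, action set, slip probability, obstacle layout, and reward values below are assumptions chosen for readability, and the BUAV dynamics are reduced to grid moves.

```python
# Minimal sketch (illustrative assumptions, not the paper's model):
# value iteration on a discretized 3-D grid standing in for the BUAV's
# configuration space. The wind disturbance is modelled stochastically:
# each commanded move succeeds with probability P_SUCCESS, otherwise the
# vehicle "slips" into one of the other neighbouring cells.
import numpy as np

GRID = (10, 10, 5)                      # (x, y, z) cells
GOAL = (9, 9, 2)
OBSTACLES = {(4, 4, z) for z in range(5)} | {(4, 5, z) for z in range(5)}

ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
P_SUCCESS = 0.8                         # probability the commanded move is realised
GAMMA = 0.95                            # discount factor
STEP_COST, GOAL_REWARD, CRASH_COST = -1.0, 100.0, -100.0


def clip(s):
    """Keep a state inside the grid."""
    return tuple(min(max(c, 0), n - 1) for c, n in zip(s, GRID))


def reward(s):
    if s == GOAL:
        return GOAL_REWARD
    if s in OBSTACLES:
        return CRASH_COST
    return STEP_COST


def transitions(s, a):
    """Yield (probability, next_state) pairs: intended move plus wind-perturbed slips."""
    yield P_SUCCESS, clip(tuple(c + d for c, d in zip(s, a)))
    slip = (1.0 - P_SUCCESS) / (len(ACTIONS) - 1)
    for other in ACTIONS:
        if other != a:
            yield slip, clip(tuple(c + d for c, d in zip(s, other)))


def value_iteration(tol=1e-4):
    """In-place (Gauss-Seidel) value iteration over the full state grid."""
    V = np.zeros(GRID)
    while True:
        delta = 0.0
        for s in np.ndindex(*GRID):
            if s == GOAL or s in OBSTACLES:
                continue
            best = max(
                sum(p * (reward(s2) + GAMMA * V[s2]) for p, s2 in transitions(s, a))
                for a in ACTIONS
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V


if __name__ == "__main__":
    V = value_iteration()
    print("Value at start cell (0, 0, 2):", V[(0, 0, 2)])
```

Given the converged value function, a greedy policy (pick the action whose expected one-step return is largest) gives a collision-avoiding route that already accounts for the chance of being blown off the commanded move, which is the role the MDP plays in the proposed approach.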
