Energy Optimisation through Path Selection for Underwater Wireless Sensor Networks

This paper explores energy-efficient ways of retrieving data from underwater sensor fields using autonomous underwater vehicles (AUVs). Since AUVs are battery-powered and therefore energy-constrained, their energy consumption is a critical consideration in designing underwater wireless sensor networks. The energy consumed by an AUV depends on its hydrodynamic design, speed, on-board payload, and trajectory. In this paper, we optimise the trajectory taken by an AUV that is deployed from a floating ship, collects data from every cluster head in an underwater sensor network, and returns to the ship to offload the data. The trajectory optimisation algorithm models trajectory selection as a stochastic shortest path problem and uses reinforcement learning to select the minimum-cost path, taking into account that banked turns consume more energy than straight movement. We also investigate the impact of AUV speed on its energy consumption. The results show that our algorithm reduces AUV energy consumption by up to 50% compared with the Nearest Neighbour algorithm for sparse deployments.
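To make the setting concrete, the sketch below illustrates the kind of tour-selection problem the abstract describes: a tabular Q-learning agent plans a tour from the ship through every cluster head and back, under an edge cost that adds a turn penalty on top of travel distance, and is compared against a Nearest Neighbour tour. All specifics here (the coordinates, the `TURN_PENALTY` constant, the cost model, and the hyperparameters) are illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

# Hypothetical cluster-head layout (x, y) in metres; index 0 is the ship.
POINTS = [(0, 0), (40, 10), (35, 60), (-20, 50), (-45, 15), (10, -40)]
TURN_PENALTY = 5.0  # assumed extra energy cost per radian of heading change

def leg(a, b):
    """Distance and heading of the straight leg from point a to point b."""
    dx, dy = POINTS[b][0] - POINTS[a][0], POINTS[b][1] - POINTS[a][1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def step_cost(prev, cur, nxt):
    """Leg cost: distance plus a penalty for the banked turn made at cur."""
    dist, heading = leg(cur, nxt)
    if prev is None:                     # first leg out of the ship: no turn
        return dist
    _, prev_heading = leg(prev, cur)
    turn = abs(heading - prev_heading)
    return dist + TURN_PENALTY * min(turn, 2 * math.pi - turn)

def tour_cost(order):
    """Total cost of the round trip ship -> cluster heads in `order` -> ship."""
    path, total, prev = [0] + list(order) + [0], 0.0, None
    for i in range(len(path) - 1):
        total += step_cost(prev, path[i], path[i + 1])
        prev = path[i]
    return total

def nearest_neighbour():
    """Baseline: always fly to the closest unvisited cluster head."""
    unvisited, cur, order = set(range(1, len(POINTS))), 0, []
    while unvisited:
        cur = min(unvisited, key=lambda n: leg(cur, n)[0])
        order.append(cur)
        unvisited.remove(cur)
    return order

def q_learning(episodes=20000, alpha=0.1, gamma=1.0, eps=0.2, seed=1):
    """Cost-minimising tabular Q-learning over states (prev, cur, visited)."""
    random.seed(seed)
    Q, n = {}, len(POINTS)
    for _ in range(episodes):
        prev, cur, visited = None, 0, frozenset()
        while len(visited) < n - 1:
            choices = [m for m in range(1, n) if m not in visited]
            q = Q.setdefault((prev, cur, visited), {m: 0.0 for m in choices})
            nxt = (random.choice(choices) if random.random() < eps
                   else min(choices, key=lambda m: q[m]))
            cost, nvis = step_cost(prev, cur, nxt), visited | {nxt}
            if len(nvis) < n - 1:        # bootstrap from the next state
                nchoices = [m for m in range(1, n) if m not in nvis]
                nq = Q.setdefault((cur, nxt, nvis), {m: 0.0 for m in nchoices})
                future = min(nq.values())
            else:                        # terminal: add the return leg to the ship
                future = step_cost(cur, nxt, 0)
            q[nxt] += alpha * (cost + gamma * future - q[nxt])
            prev, cur, visited = cur, nxt, nvis
    # Greedy rollout of the learned policy.
    prev, cur, visited, order = None, 0, frozenset(), []
    while len(visited) < n - 1:
        choices = [m for m in range(1, n) if m not in visited]
        q = Q.get((prev, cur, visited), {m: 0.0 for m in choices})
        nxt = min(choices, key=lambda m: q.get(m, 0.0))
        order.append(nxt)
        prev, cur, visited = cur, nxt, visited | {nxt}
    return order

nn_order, rl_order = nearest_neighbour(), q_learning()
print("Nearest Neighbour tour:", nn_order, "cost:", round(tour_cost(nn_order), 1))
print("RL tour:              ", rl_order, "cost:", round(tour_cost(rl_order), 1))
```

Because the turn penalty makes the cost of a leg depend on the leg before it, a greedy distance-only heuristic like Nearest Neighbour can be misled, which is one intuition for why learning over the full state (previous node, current node, visited set) can help.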
