Aerial Base Station Positioning and Power Control for Securing Communications: A Deep Q-Network Approach

Unmanned aerial vehicles (UAVs) are a technological breakthrough that supports a variety of services, including communications, and they will play a critical role in enhancing the physical-layer security of wireless networks. This paper addresses the problem of eavesdropping on the link between a ground user and a UAV that serves as an aerial base station (ABS). The reinforcement learning algorithms Q-learning and deep Q-network (DQN) are proposed to jointly optimize the ABS position and transmission power, improving the ground user's data rate and thereby increasing the secrecy capacity without the system knowing the eavesdropper's location. Simulation results show that the proposed DQN converges quickly and achieves the highest secrecy capacity compared with Q-learning and baseline approaches.
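The approach described above can be sketched with tabular Q-learning, the simpler of the two proposed algorithms. The sketch below is illustrative only: the grid size, power levels, free-space path-loss model, and user/eavesdropper locations are all assumptions made for the example, not parameters from the paper. The agent picks a (position, power) action and receives the secrecy capacity as reward; note that the eavesdropper's location appears only inside the environment's reward computation, never in the agent's state, which mirrors the paper's assumption that the system does not know it.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical scenario parameters (not from the paper) ---
GRID = 5                       # 5x5 grid of candidate ABS positions
POWERS = [10.0, 20.0, 30.0]    # candidate transmit powers (dBm), illustrative
USER = np.array([1.0, 1.0])    # ground-user location
EAVE = np.array([3.5, 0.5])    # eavesdropper location: used only by the
                               # environment to compute reward, unseen by agent
ALT = 2.0                      # fixed ABS altitude (grid units)

def rate(pos, p_dbm, target):
    """Shannon rate over a toy free-space link with unit noise power."""
    d2 = np.sum((pos - target) ** 2) + ALT ** 2
    snr = 10 ** (p_dbm / 10) / d2
    return np.log2(1 + snr)

def secrecy(pos, p_dbm):
    """Secrecy capacity: max(0, user rate - eavesdropper rate)."""
    return max(0.0, rate(pos, p_dbm, USER) - rate(pos, p_dbm, EAVE))

# State = current ABS position; action = (next position, power) pair.
n_states = GRID * GRID
n_actions = n_states * len(POWERS)
Q = np.zeros((n_states, n_actions))

alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration
state = 0
for episode in range(5000):
    # Epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    pos_idx, p_idx = divmod(a, len(POWERS))
    pos = np.array(divmod(pos_idx, GRID), dtype=float)
    r = secrecy(pos, POWERS[p_idx])
    next_state = pos_idx  # the ABS moves to the chosen position
    # Standard Q-learning temporal-difference update
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

# Report the greedy (position, power) choice after training
best_a = int(Q.max(axis=0).argmax())
best_pos_idx, best_p_idx = divmod(best_a, len(POWERS))
best_pos = np.array(divmod(best_pos_idx, GRID), dtype=float)
print("learned ABS position:", best_pos, "power (dBm):", POWERS[best_p_idx])
```

The paper's DQN variant would replace the Q-table with a neural network approximator, which scales to finer position/power discretizations where a table becomes impractical.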
