Deep Deterministic Policy Gradient Based Dynamic Power Control for Self-Powered Ultra-Dense Networks

By densely deploying base stations (BSs), the ultra-dense network (UDN) shows strong potential to enhance network capacity, but it also incurs huge power consumption and a great deal of greenhouse gas emissions. To this end, power control is regarded as a promising way to improve energy efficiency (EE). Without prior knowledge of energy arrivals, user arrivals, or channel state information, we formulate an EE optimization problem in an energy-harvesting UDN (EH-UDN) and propose a Deep Deterministic Policy Gradient (DDPG)-based framework to obtain the optimal power control scheme. The proposed DDPG-based optimization framework is evaluated against well-known reinforcement learning algorithms, i.e., the deep Q-network (DQN) and Q-learning. Numerical results show that the proposed DDPG-based framework enhances EE significantly and shows strong potential for handling complicated optimization problems.

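As a rough illustration of the DDPG machinery the abstract refers to, the sketch below implements a generic actor-critic update in PyTorch for a continuous power-control action. The state layout (battery level, channel gain, queue length), the network sizes, the reward, and all hyperparameters are assumptions made here for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: state = (battery level, channel gain, queue length)
# per BS; action = continuous transmit power in [0, P_MAX]. These details are
# placeholders, not the paper's actual system model.
STATE_DIM, ACTION_DIM, P_MAX = 3, 1, 1.0

class Actor(nn.Module):
    """Deterministic policy: maps a state to a power level in [0, P_MAX]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid())
    def forward(self, s):
        return P_MAX * self.net(s)

class Critic(nn.Module):
    """Q(s, a): estimates the long-term (e.g., EE-based) return of a power choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005  # illustrative discount factor and soft-update rate

def ddpg_update(batch):
    """One DDPG step on a replay batch of tensors (s, a, r, s2)."""
    s, a, r, s2 = batch
    # Critic update: regress Q(s, a) toward the bootstrapped target.
    with torch.no_grad():
        y = r + GAMMA * critic_tgt(s2, actor_tgt(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor update: deterministic policy gradient, ascend Q along the policy.
    actor_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Soft (Polyak) update of the target networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

In this sketch, the continuous power action is what distinguishes DDPG from the DQN and Q-learning baselines mentioned above, which would require discretizing the transmit-power range into a fixed set of levels.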