α-Fairness-Maximizing User Association in Energy-Constrained Small Cell Networks

Renewable energy source (RES)-powered base stations have attracted tremendous research interest in recent years because they can expand network coverage without extending the power grid. This paper proposes a novel user association (UA), resource allocation (RA), and dynamic power control (PC) scheme that maximizes α-fairness in RES-assisted small cell networks. α-fairness is a general fairness notion that flexibly balances throughput, proportional fairness, and max-min fairness according to the value of α. Nevertheless, no existing study has proposed a joint UA, RA, and PC scheme that maximizes α-fairness, owing to the NP-hardness of the problem. Furthermore, fixed-policy PC designs cannot adapt to the time-varying environment (e.g., energy harvesting and wireless channels) of RES-assisted networks. We first provide a Lagrangian duality-based algorithm that solves the UA and RA problem for a fixed PC. Next, we propose a dynamic PC scheme based on deep reinforcement learning (DRL) that selects the best PC under the time-varying environment. However, because the UA and RA algorithm executed at every step of the dynamic PC requires a long computation time, we further accelerate the UA and RA computation with DRL. Inspired by the Lagrangian duality, we design a DRL-based UA and RA scheme with a low-dimensional continuous variable obtained by relaxing the UA variable, whose cardinality grows exponentially with the numbers of base stations and users. Simulation results show that the proposed scheme achieves a computation time roughly 100 times shorter than that of the optimization-based schemes by evaluating only two neural networks. In particular, although proportional fairness maximization has been studied extensively, the proposed scheme outperforms the optimization-based schemes in the throughput, proportional fairness, and max-min fairness metrics.
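
As a point of reference, the α-fair utility commonly adopted in the network utility maximization literature (Mo and Walrand) can be written, for a long-term user rate \(\bar{R}_k > 0\) (the symbol \(\bar{R}_k\) is illustrative and need not match the paper's notation), as

\[
U_\alpha(\bar{R}_k) =
\begin{cases}
\dfrac{\bar{R}_k^{\,1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1, \\[4pt]
\log \bar{R}_k, & \alpha = 1,
\end{cases}
\]

so that α = 0 recovers sum-throughput maximization, α = 1 proportional fairness, and α → ∞ max-min fairness; the exact objective and constraints optimized in this paper may differ in detail.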
