Deep Convolutional Neural Network Assisted Reinforcement Learning Based Mobile Network Power Saving

This paper addresses the power saving problem in mobile networks. Base station (BS) power and network traffic volume (NTV) models are first established. The BS power model is derived from in-house equipment measurements sampled over different BS load configurations, and the NTV model is built from traffic data reported in the literature. A threshold-based adaptive power saving method is then presented as the benchmark. Next, a BS power control framework is developed using Q-learning, in which the state-action value function is approximated by a deep convolutional neural network (DCNN). The DCNN-Q agent controls the loads of individual cells to track NTV variations and reduce power consumption. The framework is trained and evaluated by simulation in a heterogeneous network of macrocells and microcells. Results show that the proposed DCNN-Q method achieves larger power savings than the threshold-based benchmark.
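For concreteness, the sketch below shows one way a DCNN-approximated Q-function for cell-load control could be structured. It is not the authors' implementation: the PyTorch framework, the 8x8 per-cell load map used as the state, the five discrete load-level actions, the layer sizes, and the reward placeholder (negative BS power) are all illustrative assumptions.

```python
# Minimal DCNN-Q sketch (illustrative, not the paper's implementation).
# State: a 1x8x8 map of normalized per-cell loads (assumed shape).
# Action: one of N_ACTIONS hypothetical discrete load levels.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ACTIONS = 5  # assumed number of discrete load-control actions

class DCNNQ(nn.Module):
    """Convolutional network approximating Q(state, action)."""
    def __init__(self, n_actions: int = N_ACTIONS):
        super().__init__()
        # Two conv layers read the spatial cell-load map.
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(32 * 8 * 8, 128)
        self.fc2 = nn.Linear(128, n_actions)  # one Q-value per action

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.flatten(start_dim=1)
        return self.fc2(F.relu(self.fc1(x)))

def td_update(net, opt, s, a, r, s_next, gamma=0.99):
    """One Q-learning step: fit Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    q = net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * net(s_next).max(dim=1).values
    loss = F.smooth_l1_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random placeholder data standing in for simulated transitions.
net = DCNNQ()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
s = torch.rand(4, 1, 8, 8)              # batch of observed load maps
a = torch.randint(0, N_ACTIONS, (4,))   # actions taken
r = -torch.rand(4)                      # e.g., negative BS power consumption
s_next = torch.rand(4, 1, 8, 8)         # next-state load maps
td_update(net, opt, s, a, r, s_next)
```

In this reading, the convolutional layers let one network generalize across the spatial layout of macrocells and microcells, while the reward (here a stand-in of negative power draw) steers the agent toward low-consumption load configurations that still follow NTV variations.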
