Energy-Efficient Resource Allocation with Dynamic Cache Using Reinforcement Learning

With the increasing volume of data in wireless networks, energy consumption becomes a more serious problem because of the energy required for massive data transmission. This paper proposes an energy-efficient resource allocation algorithm with dynamic caching, which adjusts the caching strategy according to the channel state to reduce energy consumption under the constraint of smooth video streaming. Mathematical models of the energy consumption of video transmission and of the decision-selection process are established. To cope with the dynamic channel environment, an online algorithm based on reinforcement learning is proposed. To reduce the overall energy consumption of the system while balancing the energy spent on transmission and computation, the offline part of the model is trained with a neural network and the computation accuracy is adjusted adaptively. Simulation results show that the proposed algorithm effectively improves the total energy efficiency of the system.
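To make the idea of channel-aware caching decisions concrete, the following is a minimal sketch of an online reinforcement-learning loop in the spirit of the abstract. It is not the authors' formulation: the quantized channel states, the toy energy-cost model, the "transmit vs. cache" action set, and all parameter values are assumptions introduced purely for illustration.

```python
# Illustrative sketch (assumed, not the paper's implementation): a tabular
# Q-learning agent that decides, per time slot, whether to transmit a video
# segment now or hold it in the cache and wait for a better channel.

import random

N_CHANNEL_STATES = 4                 # quantized channel-quality levels (assumed)
ACTIONS = ["transmit", "cache"]      # send the segment now vs. keep it cached

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-table indexed by (channel_state, backlog_flag) -> action values
Q = {(s, b): {a: 0.0 for a in ACTIONS}
     for s in range(N_CHANNEL_STATES) for b in (0, 1)}

def energy_cost(channel_state: int, action: str, backlog: int) -> float:
    """Toy energy model: transmitting on a poor channel costs more energy,
    while caching defers transmission but risks a stalled (non-smooth) stream."""
    if action == "transmit":
        return 1.0 / (channel_state + 1)      # worse channel -> more energy per bit
    return 0.2 + (2.0 if backlog else 0.0)    # caching cost plus a stall penalty

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def step(state):
    """One slot of online learning: act, observe the energy cost, update Q."""
    channel, backlog = state
    action = choose_action(state)
    reward = -energy_cost(channel, action, backlog)   # minimizing energy = maximizing reward
    # Simulated transition (assumed): the channel varies randomly, and a backlog
    # builds up only if the segment keeps being cached instead of transmitted.
    next_state = (random.randrange(N_CHANNEL_STATES),
                  1 if action == "cache" else 0)
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    return next_state

# Usage: run a short training loop over simulated time slots.
state = (0, 0)
for _ in range(10_000):
    state = step(state)
```

The design choice sketched here, trading a small caching cost against the higher transmission energy of a poor channel, mirrors the abstract's goal of balancing energy consumption against smooth playback; the paper's actual models and the neural-network-trained offline component are not reproduced.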
