Energy saving in heterogeneous cellular networks via a transfer reinforcement learning based policy

Energy-efficient operation of heterogeneous networks (HetNets) has become crucial owing to their rapidly increasing deployment. This work presents a novel approach in which an actor-critic (AC) reinforcement learning (RL) framework enables traffic-based ON/OFF switching of base stations (BSs) in a HetNet, reducing overall energy consumption. Further, previously estimated traffic statistics are transferred to future scenarios, which speeds up learning and yields additional energy saving. The presented scheme reduces energy consumption by up to 82%. Furthermore, the trade-off between system delay and energy saving is analyzed.
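As a rough illustration of the approach described above, the sketch below pairs a tabular actor-critic learner with a toy traffic/energy environment and warm-starts the actor's preferences from a previously learned policy, mimicking the transfer step. Everything here is an assumption for illustration only: the state discretization, the reward (energy plus a delay penalty), the step sizes, and the theta_old stand-in are not taken from the paper.

```python
import numpy as np

# Minimal tabular actor-critic sketch for BS ON/OFF switching,
# assuming a discretized traffic-load state and a small set of
# ON/OFF configurations. All names and parameters are illustrative.

rng = np.random.default_rng(0)

N_STATES = 10            # discretized traffic-load levels (assumed)
N_ACTIONS = 4            # e.g., how many small cells are switched ON
ALPHA, BETA = 0.1, 0.05  # critic / actor step sizes (assumed)
GAMMA = 0.9              # discount factor (assumed)

theta = np.zeros((N_STATES, N_ACTIONS))  # actor: action preferences
V = np.zeros(N_STATES)                   # critic: value estimates

def softmax_policy(s):
    """Softmax over the actor's preferences in state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def step(s, a):
    """Toy environment: cost = energy of active BSs + delay penalty.
    A real HetNet traffic simulator would replace this."""
    energy = (a + 1) * 1.0              # more BSs ON -> more energy
    delay = max(0.0, s - 2 * (a + 1))   # too few BSs ON -> congestion
    reward = -(energy + delay)
    s_next = rng.integers(N_STATES)     # placeholder traffic transition
    return reward, s_next

# Transfer step (hypothetical): warm-start the actor from preferences
# learned under earlier traffic statistics so learning converges faster.
theta_old = np.zeros((N_STATES, N_ACTIONS))  # stand-in for prior policy
theta += 0.5 * theta_old

s = rng.integers(N_STATES)
for t in range(5000):
    p = softmax_policy(s)
    a = rng.choice(N_ACTIONS, p=p)
    r, s_next = step(s, a)
    td_error = r + GAMMA * V[s_next] - V[s]  # critic TD error
    V[s] += ALPHA * td_error                 # critic update
    grad = -p                                # grad of log softmax policy
    grad[a] += 1.0
    theta[s] += BETA * td_error * grad       # actor update
    s = s_next
```

The warm-start term is the simplest possible reading of policy transfer; the TACT framework cited by the paper blends native and transferred knowledge more carefully, so this should be read as a conceptual sketch rather than a reimplementation.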
