On Improving Energy Efficiency within Green Femtocell Networks: A Hierarchical Reinforcement Learning Approach

One efficient solution for improving coverage and increasing capacity in cellular networks is the deployment of femtocells. As cellular networks grow more complex, the energy consumption of the entire network infrastructure is becoming important in terms of both operational costs and environmental impact. This paper investigates the energy efficiency of two-tier femtocell networks by combining game theory and stochastic learning. Using a Stackelberg game formulation, a hierarchical reinforcement learning framework is applied to study the joint expected-utility maximization of macrocells and femtocells subject to minimum signal-to-interference-plus-noise-ratio (SINR) requirements. In the learning procedure, the macrocells act as leaders and the femtocells as followers. At each time step, the leaders commit to dynamic strategies based on the best responses of the followers, while the followers compete against each other with no information beyond the leaders' transmission parameters. We propose two reinforcement-learning-based intelligent algorithms to schedule each cell's stochastic power levels. Numerical experiments validate the investigation, showing that the two learning algorithms substantially improve the energy efficiency of femtocell networks.
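To make the leader-follower learning procedure concrete, the sketch below illustrates the idea in a deliberately simplified setting: one macrocell (leader) and one femtocell (follower) pick transmit powers from a small discrete set, the follower learns its best response to each announced leader power via stateless Q-learning, and the leader then commits to the power maximizing its own energy efficiency given those learned responses. All channel gains, power levels, and learning constants are illustrative assumptions, not values or algorithms taken from the paper.

```python
import math
import random

# Illustrative assumptions (not from the paper): candidate powers,
# noise power, and direct/cross-tier channel gains.
POWERS = [0.5, 1.0, 2.0]          # candidate transmit powers (W)
NOISE = 0.1                        # receiver noise power
G_MM, G_FF = 1.0, 1.0              # direct-link channel gains
G_FM, G_MF = 0.3, 0.3              # cross-tier interference gains

def utility(p_own, p_other, g_own, g_cross):
    """Energy efficiency: achievable rate per unit transmit power."""
    sinr = g_own * p_own / (NOISE + g_cross * p_other)
    return math.log2(1.0 + sinr) / p_own

def follower_best_response(p_leader):
    """Follower's exact best response, used here as ground truth."""
    return max(POWERS, key=lambda p: utility(p, p_leader, G_FF, G_MF))

def learn_follower_policy(episodes=3000, alpha=0.5, eps=0.1, seed=0):
    """Stateless Q-learning where the leader's announced power is the state."""
    rng = random.Random(seed)
    q = {p_l: {p_f: 0.0 for p_f in POWERS} for p_l in POWERS}
    for _ in range(episodes):
        p_l = rng.choice(POWERS)                  # leader announces its power
        if rng.random() < eps:                    # epsilon-greedy exploration
            p_f = rng.choice(POWERS)
        else:
            p_f = max(POWERS, key=lambda p: q[p_l][p])
        r = utility(p_f, p_l, G_FF, G_MF)         # follower's reward
        q[p_l][p_f] += alpha * (r - q[p_l][p_f])  # one-step Q update
    return {p_l: max(POWERS, key=lambda p: q[p_l][p]) for p_l in POWERS}

def stackelberg_leader_choice(follower_policy):
    """Leader commits to the power maximizing its own energy efficiency,
    anticipating the follower's (learned) best responses."""
    return max(POWERS,
               key=lambda p_l: utility(p_l, follower_policy[p_l], G_MM, G_FM))
```

In this toy instance the follower's learned policy coincides with its exact best response, which is the anticipation the Stackelberg leader exploits; the paper's actual algorithms handle multiple cells and stochastic power scheduling.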
