Mode Selection and Resource Allocation in Sliced Fog Radio Access Networks: A Reinforcement Learning Approach

Mode selection and resource allocation in fog radio access networks (F-RANs) have been advocated as key techniques for improving spectral and energy efficiency. In this paper, we investigate the joint optimization of mode selection and resource allocation in uplink F-RANs, where both traditional user equipments (UEs) and fog UEs are served by constructed network slice instances. The considered optimization is formulated as a mixed-integer programming problem, and both orthogonal and multiplexed subchannel allocation strategies are proposed to guarantee slice isolation. Motivated by advances in machine learning, two reinforcement learning based algorithms are developed to solve the original high-complexity problem under the specific performance requirements of traditional and fog UEs. The basic idea of both proposals is to learn a good mode selection policy from the immediate reward fed back by the environment. Simulation results validate the benefits of the proposed algorithms and show that a tradeoff between system power consumption and queue delay can be achieved.
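To make the learning loop in the abstract concrete, the following is a minimal sketch of a reward-driven mode selection agent: it picks a transmission mode, receives an immediate reward, and updates its policy with tabular Q-learning. The state definition, two-mode action set, reward weights, and toy environment (`step_env`) are illustrative assumptions for this sketch, not the paper's actual formulation or algorithm.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for per-UE mode selection.
# Assumptions (not from the paper): the state is a discretized queue length,
# the actions are two transmission modes, and the reward is the negative
# weighted sum of transmit power and queue delay, reflecting the
# power/delay tradeoff mentioned in the abstract.

ACTIONS = ["fog_mode", "cloud_mode"]           # illustrative mode set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # learning rate, discount, exploration

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    """Epsilon-greedy mode selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    """Standard Q-learning update driven by the immediate reward."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

def step_env(state, action):
    """Hypothetical environment stand-in: returns (reward, next_state).
    A real implementation would compute power and delay from the chosen
    mode and the allocated subchannels."""
    power = 1.0 if action == "cloud_mode" else 0.4
    delay = max(0, state - (2 if action == "cloud_mode" else 1))
    reward = -(0.5 * power + 0.5 * delay)                # weighted power/delay cost
    next_state = min(10, delay + random.randint(0, 2))   # new queue length
    return reward, next_state

state = 5
for _ in range(10000):
    action = choose_action(state)
    reward, next_state = step_env(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

The weights in the reward simply trade transmit power against queued delay; tuning them shifts the learned policy along the power/delay tradeoff curve that the simulation results illustrate.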
