When QoE meets learning: A distributed traffic-processing framework for elastic resource provisioning in HetNets

Abstract In heterogeneous networks (HetNets), the temporal dynamics of user requirements produce orders-of-magnitude variations in the traffic that macro cells must process. To achieve high quality of experience (QoE) for users and to allocate cell resources intelligently, we first propose a distributed traffic-processing framework (SDVTS) for elastic resource partitioning that accommodates these dynamics from both user-centric and resource-oriented perspectives. Assisted by a software-defined infrastructure, SDVTS handles request-based and push-based services in an interactive loop. Second, we formulate a traffic-processing time model that computes the delay of handling traffic; the resulting non-convex problem is decomposed, and a dual evolution algorithm is developed to approximate the optimal solution. Furthermore, we introduce a low-complexity reinforcement learning algorithm with personalized QoE profiling. A distributed algorithm operating in coalition between user and cell is designed for seamless interconnection of the advanced reinforcement learning system (ARLS) components and engines embedded in SDVTS. Extensive simulation results with thorough analysis demonstrate that SDVTS outperforms competing approaches in terms of both QoE and cell system performance.
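The decomposition-plus-dual-evolution idea mentioned above can be illustrated with a minimal sketch. The abstract does not give the delay model, so the snippet assumes an M/M/1-style per-flow delay lam/(c - lam), a single capacity constraint coupling the flows, and subgradient ascent on the corresponding Lagrange multiplier; all variable names and constants are illustrative, not the paper's.

```python
import numpy as np

# Assumed setup: n traffic flows share a cell's processing capacity C.
# Flow i gets capacity c[i] and sees an M/M/1-style delay lam[i]/(c[i]-lam[i])
# (a stand-in for the paper's traffic-processing time model). The coupling
# constraint sum(c) <= C is relaxed with a multiplier mu that evolves by
# subgradient ascent (the "dual evolution" step).

lam = np.array([2.0, 3.0, 1.5])   # assumed per-flow arrival rates
C = 12.0                           # assumed total processing capacity
mu = 0.1                           # dual variable for the capacity constraint
step = 0.05                        # dual step size

def primal_allocation(mu, lam, eps=1e-3):
    # Minimizing lam/(c - lam) + mu*c per flow has the closed form
    # c = lam + sqrt(lam/mu); clip mu away from zero for stability.
    mu = max(mu, eps)
    return lam + np.sqrt(lam / mu)

for _ in range(200):
    c = primal_allocation(mu, lam)              # decomposed per-flow subproblems
    mu = max(0.0, mu + step * (c.sum() - C))    # dual evolution via subgradient ascent

delay = np.sum(lam / (c - lam))
print("allocation:", np.round(c, 3), "total:", round(float(c.sum()), 3), "delay:", round(float(delay), 3))
```

Because the relaxed per-flow subproblems are solved independently, the same loop could run at each cell with only the dual variable exchanged, which is the usual motivation for this style of decomposition.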
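The reinforcement-learning component with personalized QoE profiling can likewise be sketched. The ARLS design is not specified in the abstract, so the snippet below assumes a simple tabular Q-learning agent choosing a resource share per traffic-load state, with a hypothetical per-user profile weight shaping the QoE reward; every state, action, and constant here is an assumption for illustration only.

```python
import random

# Assumed: a user-cell pair learns which fraction of the cell's resources to
# request in each traffic-load state. The reward is weighted by a hypothetical
# per-user QoE profile to mimic personalized QoE profiling.

STATES = ["low", "medium", "high"]                   # assumed load states
ACTIONS = [0.25, 0.5, 0.75, 1.0]                     # assumed resource shares
PROFILE = {"low": 0.6, "medium": 1.0, "high": 1.4}   # hypothetical QoE profile weights
DEMAND = {"low": 0.3, "medium": 0.6, "high": 0.9}    # assumed normalized demand

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(state, share):
    # QoE rises with satisfied demand and is penalized for over-provisioning.
    return PROFILE[state] * (min(share, DEMAND[state]) - 0.2 * max(0.0, share - DEMAND[state]))

state = random.choice(STATES)
for _ in range(20000):
    if random.random() < epsilon:                    # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    nxt = random.choice(STATES)                      # assumed i.i.d. load transitions
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```

A tabular agent is used only to keep the sketch self-contained; the low-complexity and distributed aspects claimed for ARLS would require the actual state/action design and coordination protocol from the paper.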
