Programmable and Customized Intelligence for Traffic Steering in 5G Networks Using Open RAN Architectures

5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale, demanding automated control and optimization of network functionalities customized to the needs of individual users. Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture. To fill this gap, the Open RAN paradigm and its specifications introduce an open architecture with abstractions that enable closed-loop control and data-driven, intelligent optimization of the RAN at the user level. This is achieved through custom RAN control applications (i.e., xApps) deployed on the near-real-time RAN Intelligent Controller (near-RT RIC) at the edge of the network. Despite these premises, the research community today lacks a sandbox in which to build data-driven xApps and create large-scale datasets for effective AI training. In this paper, we address this gap by introducing ns-O-RAN, a software framework that integrates a real-world, production-grade near-RT RIC with a 3GPP-based simulated environment on ns-3, enabling the development of xApps, automated large-scale data collection, and testing of Deep Reinforcement Learning-driven control policies for user-level optimization. In addition, we propose the first user-specific O-RAN Traffic Steering (TS) intelligent handover framework. It uses Random Ensemble Mixture (REM), combined with a state-of-the-art Convolutional Neural Network architecture, to optimally assign a serving base station to each user in the network. Our TS xApp, trained with more than 40 million data points collected by ns-O-RAN, runs on the near-RT RIC and controls its base stations. We evaluate its performance on a large-scale deployment, showing that xApp-based handover improves throughput and spectral efficiency by an average of 50% over traditional handover heuristics, with lower mobility overhead.
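The Random Ensemble Mixture idea at the core of the TS xApp can be sketched as follows: K Q-value heads are combined with a random convex combination, and the serving cell with the highest mixed Q-value is selected. This is a minimal illustration only, assuming Dirichlet-sampled mixing weights and toy array shapes; the function names and dimensions are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rem_q_values(q_heads: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Mix K Q-heads with a random convex combination (REM).

    q_heads: array of shape (K, n_cells) with one Q-value per candidate cell
    per head. The Dirichlet draw yields K non-negative weights summing to 1.
    """
    k = q_heads.shape[0]
    alphas = rng.dirichlet(np.ones(k))            # random convex weights
    return np.tensordot(alphas, q_heads, axes=1)  # weighted sum over heads

# Toy example: 4 Q-heads scoring 3 candidate serving cells for one UE state.
q_heads = rng.normal(size=(4, 3))
q = rem_q_values(q_heads, rng)
serving_cell = int(np.argmax(q))  # handover target: cell with highest mixed Q
```

In the actual xApp, each head would be the output of the CNN-based Q-network for the current user state, and the argmax would drive the handover command sent over the RIC's control interface.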
