EXPLORA: AI/ML EXPLainability for the Open RAN

The Open Radio Access Network (RAN) paradigm is transforming cellular networks into systems of disaggregated, virtualized, and software-based components. These components self-optimize the network through programmable, closed-loop control that leverages Artificial Intelligence (AI) and Machine Learning (ML) routines. In this context, Deep Reinforcement Learning (DRL) has shown great potential for solving complex resource allocation problems. However, DRL-based solutions are inherently hard to explain, which hinders their deployment and use in practice. In this paper, we propose EXPLORA, a framework that provides explainability of DRL-based control solutions for the Open RAN ecosystem. EXPLORA synthesizes network-oriented explanations based on an attributed graph that links the actions taken by a DRL agent (i.e., the nodes of the graph) to the input state space (i.e., the attributes of each node). This novel approach allows EXPLORA to explain models by providing information on the wireless context in which the DRL agent operates. EXPLORA is also designed to be lightweight enough for real-time operation. We prototype EXPLORA and test it experimentally on an O-RAN-compliant near-real-time RIC deployed on the Colosseum wireless network emulator. We evaluate EXPLORA on agents trained for different purposes and showcase how it generates clear network-oriented explanations. We also show how these explanations can be used to perform informative and targeted intent-based action steering, achieving median transmission bitrate improvements of 4% and tail improvements of 10%.
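The attributed-graph idea described above can be illustrated with a minimal sketch: nodes are the actions a DRL agent has taken, each node's attributes summarize the input states (e.g., RAN KPIs) observed when that action was chosen, and edges connect consecutive actions in the control loop. All class, method, and feature names below are hypothetical illustrations, not the actual EXPLORA implementation.

```python
from collections import defaultdict


class ActionGraph:
    """Toy attributed graph linking DRL actions (nodes) to the input
    state space (node attributes). Each node's attribute vector is the
    running mean of the KPI states observed when that action was taken."""

    def __init__(self):
        self.sums = defaultdict(lambda: None)   # action -> summed state vector
        self.counts = defaultdict(int)          # action -> number of observations
        self.edges = set()                      # (prev_action, action) transitions
        self.prev = None

    def observe(self, state, action):
        """Record one (state, action) pair from the agent's control loop."""
        if self.sums[action] is None:
            self.sums[action] = [0.0] * len(state)
        self.sums[action] = [s + x for s, x in zip(self.sums[action], state)]
        self.counts[action] += 1
        if self.prev is not None:
            self.edges.add((self.prev, action))
        self.prev = action

    def attributes(self, action):
        """Mean input state associated with this action node."""
        n = self.counts[action]
        return [s / n for s in self.sums[action]]

    def explain(self, action, feature_names):
        """Synthesize a network-oriented explanation: which state feature
        dominates the context in which this action is selected."""
        attrs = self.attributes(action)
        top = max(range(len(attrs)), key=lambda i: attrs[i])
        return f"action {action} is mostly taken when {feature_names[top]} is high"
```

An intent-based steering policy could then query such a graph, e.g. prefer the action node whose attributes best match a "maximize bitrate" intent; the real framework operates on richer state spaces and graph structure than this sketch.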
