SDN Flow Entry Management Using Reinforcement Learning

Modern information technology services largely depend on cloud infrastructures built on top of Datacenter Networks (DCNs), which use high-speed links, fast switching gear, and redundancy to provide flexibility and resiliency. In this environment, network traffic comprises long-lived (elephant) and short-lived (mice) flows with partition/aggregate traffic patterns. Although SDN-based approaches can allocate networking resources for such flows efficiently, the overhead of network reconfiguration can be significant. Given the limited capacity of the Ternary Content-Addressable Memory (TCAM) deployed in an OpenFlow-enabled switch, it is crucial to determine which forwarding rules should remain in the flow table and which should be handled by the SDN controller when a table miss occurs, so that the retained flow entries minimize the long-term control-plane overhead between the controller and the switches. To achieve this goal, we propose a machine learning technique that utilizes two variations of Reinforcement Learning (RL): a traditional RL-based algorithm and a deep-reinforcement-learning-based one. Emulation results using the RL algorithm show around a 60% reduction in long-term control-plane overhead and around a 14% improvement in the table-hit ratio compared to the Multiple Bloom Filters (MBF) method, given a fixed flow-table size of 4 KB.
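The keep-versus-offload decision described above can be framed as a standard tabular Q-learning problem. The following is a minimal sketch of that framing, not the paper's actual algorithm: the state discretization (binned packet count and idle time), the reward shaping (reward for table hits, penalty for controller round-trips), and all class and parameter names are illustrative assumptions.

```python
import random
from collections import defaultdict

class FlowEntryAgent:
    """Tabular Q-learning agent that decides, on each table miss, whether a
    flow's rule should be kept in the limited TCAM or offloaded to the
    controller. State and reward design are hypothetical."""

    KEEP, OFFLOAD = 0, 1  # the two actions

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        # Q-table: state -> [Q(s, KEEP), Q(s, OFFLOAD)], initialized to zero
        self.q = defaultdict(lambda: [0.0, 0.0])
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, pkt_count, idle_time):
        # Coarse bins keep the Q-table small (assumed discretization).
        return (min(pkt_count // 10, 9), min(int(idle_time), 9))

    def act(self, s):
        if random.random() < self.epsilon:  # epsilon-greedy exploration
            return random.choice([self.KEEP, self.OFFLOAD])
        return max((self.KEEP, self.OFFLOAD), key=lambda a: self.q[s][a])

    def update(self, s, a, reward, s_next):
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])

# Example step: +1.0 could stand for a table hit on a kept rule, while a
# controller round-trip would incur a negative reward (assumed shaping).
agent = FlowEntryAgent()
s = agent.state(pkt_count=25, idle_time=2.0)
a = agent.act(s)
agent.update(s, a, reward=1.0, s_next=agent.state(30, 1.0))
```

In a real deployment the agent would observe per-flow counters exposed by the switch and act on `packet_in` events at the controller; the long-term (discounted) objective in the update rule is what lets the agent trade an immediate TCAM slot against future controller overhead.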
