DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

The evolution of communication technologies over the past few decades has led to a large increase in the complexity and overall size of telecommunication networks. This growth has increased the need for innovation in the field of Traffic Engineering (TE), as existing solutions are not flexible enough to adapt to these changes. With the arrival of 5G technologies, the urgency to modernize the field is higher than ever, and the softwarization and virtualization of the infrastructure open new possibilities for TE optimization, namely the use of Artificial Intelligence (AI) based methods for traffic management. Recent advances in AI have provided model-free optimization methods, such as Deep Reinforcement Learning (DRL), that can optimize traffic distributions in complex network scenarios that are hard to model. This thesis aims to provide a DRL-based solution for TE in which an agent makes routing decisions based on the current state of the network, with the goal of balancing the load across the network paths. A DRL agent is developed and trained in two scenarios, where the traffic already present in the network is generated either randomly or according to a systematic pattern, and a simulation environment was developed to train and evaluate the agent.
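The core idea of the thesis, an agent that routes incoming flows so as to balance load across candidate paths, can be illustrated with a minimal sketch. The code below is not the thesis implementation: it substitutes tabular Q-learning for the deep variant, and all names and parameters (`NUM_PATHS`, `CAPACITY`, the reward being the negative of the highest path utilization, the randomly generated per-flow demands) are illustrative assumptions, chosen only to show the state/action/reward structure of the routing problem.

```python
import random

NUM_PATHS = 3        # candidate paths between source and destination (assumed)
CAPACITY = 10.0      # per-path capacity (assumed)
EPISODE_FLOWS = 20   # flows routed per training episode (assumed)

def discretize(loads):
    """State: a coarse load level (0..3) per path."""
    return tuple(min(3, int(4 * l / CAPACITY)) for l in loads)

def step(loads, action, demand):
    """Route one flow onto the chosen path; reward penalizes imbalance."""
    loads = list(loads)
    loads[action] += demand
    # Negative of the highest utilization -> incentive to balance the load.
    return loads, -max(l / CAPACITY for l in loads)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning stand-in for the DRL agent (illustrative only)."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        loads = [0.0] * NUM_PATHS
        for _ in range(EPISODE_FLOWS):
            demand = rng.uniform(0.5, 1.5)  # randomly generated traffic
            s = discretize(loads)
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.randrange(NUM_PATHS)
            else:
                a = max(range(NUM_PATHS), key=lambda x: q.get((s, x), 0.0))
            loads, r = step(loads, a, demand)
            s2 = discretize(loads)
            best_next = max(q.get((s2, x), 0.0) for x in range(NUM_PATHS))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q

def evaluate(q, flows=20):
    """Greedily route unit-demand flows with the learned policy."""
    loads = [0.0] * NUM_PATHS
    for _ in range(flows):
        s = discretize(loads)
        a = max(range(NUM_PATHS), key=lambda x: q.get((s, x), 0.0))
        loads[a] += 1.0
    return loads
```

In the thesis setting the Q-table would be replaced by a neural network over a richer network state, but the interaction loop (observe path loads, pick a path, receive a balance-oriented reward) is the same shape.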
