Real-time state-dependent routing based on user perception

To resolve network infrastructure problems, a network provider has to improve service quality. Traditionally, however, maintaining and improving service quality are assessed in terms of Quality of Service (QoS) criteria, not in terms of end-user satisfaction and perception. The latter is captured by Quality of Experience (QoE), which has recently become the most important trend for guaranteeing the quality of network services. QoE represents the subjective perception of end-users of network services, in combination with network functions such as admission control, resource management, routing, and traffic control. In this paper, we focus on a routing mechanism driven by end-user QoE. Routing that tries to satisfy multiple QoS constraints simultaneously is an NP-complete problem. To avoid the classification problem of these multiple criteria and to reduce complexity for the future Internet, we propose two routing protocols based on user QoE measurement that construct an adaptive and evolutionary system. Our first approach is routing driven by terminal QoE, based on a least-squares reinforcement learning technique called Least Squares Policy Iteration (LSPI). The second approach, QQAR (QoE Q-learning based Adaptive Routing), is an improvement of the first one. QQAR is based on Q-Learning, a reinforcement learning algorithm, and uses Pseudo Subjective Quality Assessment (PSQA), a real-time QoE assessment tool based on Random Neural Networks, to evaluate QoE. Experimental results showed a significant performance gain over traditional routing protocols.
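To illustrate the idea of QoE-driven Q-learning routing described above, the sketch below shows a per-node Q-routing-style agent whose reward is a QoE score (e.g. a MOS in [1, 5], as a PSQA-like tool would produce). This is a minimal sketch under assumed conventions: the class name, the discounted update, the optimistic initialization, and the reward scale are illustrative assumptions, not the QQAR specification.

```python
import random
from collections import defaultdict


class QoEQRouter:
    """One node's Q-table: q[dest][neighbor] estimates the (discounted)
    QoE return of forwarding a packet bound for `dest` via `neighbor`.
    Names and parameters are illustrative, not taken from QQAR."""

    def __init__(self, neighbors, alpha=0.5, gamma=0.8, epsilon=0.1):
        self.neighbors = list(neighbors)
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # Optimistic initialization at the best MOS (5.0) so that
        # every outgoing link gets tried at least once.
        self.q = defaultdict(lambda: {n: 5.0 for n in self.neighbors})

    def choose_next_hop(self, dest):
        # Epsilon-greedy selection over the neighbors for this destination.
        if random.random() < self.epsilon:
            return random.choice(self.neighbors)
        return max(self.q[dest], key=self.q[dest].get)

    def update(self, dest, neighbor, qoe_reward, neighbor_best_q):
        # Q-learning update. `qoe_reward` is the measured QoE of the hop;
        # `neighbor_best_q` is the neighbor's best estimate toward `dest`,
        # reported back in an acknowledgement (as in classic Q-routing).
        target = qoe_reward + self.gamma * neighbor_best_q
        self.q[dest][neighbor] += self.alpha * (target - self.q[dest][neighbor])
```

In use, a node would call `update` whenever an acknowledgement carrying a QoE measurement returns, and `choose_next_hop` when forwarding; with `epsilon=0` the agent routes greedily on learned QoE estimates.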
