Deep Reinforcement Learning for Communication Flow Control in Wireless Mesh Networks

Wireless mesh networks (WMNs) are among the most promising technologies for Internet of Things (IoT) applications because of their self-adaptive and self-organizing nature. To meet different communication performance requirements in WMNs, traditional approaches must program flow control strategies explicitly, so the performance of WMNs is significantly affected by the dynamic properties of the underlying networks in real deployments. To provide a more flexible solution, in this article we present, for the first time, how emerging Deep Reinforcement Learning (DRL) can be applied to communication flow control in WMNs. Moreover, unlike a general DRL-based networking solution, in which the network properties are pre-defined, we leverage the adaptive nature of WMNs and propose a self-adaptive DRL approach. Specifically, our method reconstructs the WMN during the training of the DRL model, so the trained model captures more properties of WMNs and achieves better performance. As a proof of concept, we have implemented our method with a self-adaptive Deep Q-learning Network (DQN) model. The evaluation results show that the presented solution significantly improves the communication performance of data flows in WMNs compared to a static benchmark solution.
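The core idea of reconstructing the network during training can be illustrated with a minimal sketch. The following is not the paper's implementation: it substitutes tabular Q-learning for the DQN and a toy next-hop routing task for full flow control, and all function names (`random_mesh`, `train`, `route`) and reward values are illustrative assumptions. It shows only the self-adaptive training loop, in which a fresh mesh topology is generated each episode so the learned policy is not tied to one fixed network.

```python
import random
from collections import defaultdict

def random_mesh(n_nodes, p=0.5, rng=None):
    """Reconstruct a random mesh topology as an adjacency map over n_nodes."""
    rng = rng or random
    adj = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    # Guarantee connectivity with a backbone chain.
    for i in range(n_nodes - 1):
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    return adj

def train(n_nodes=6, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Q-learning for next-hop selection toward node n_nodes-1.

    The topology is regenerated every episode, mimicking the paper's
    self-adaptive training in which the WMN is reconstructed during
    training rather than fixed in advance.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)  # (node, next_hop) -> estimated value
    dst = n_nodes - 1
    for _ in range(episodes):
        adj = random_mesh(n_nodes, rng=rng)  # reconstruct the WMN
        node, hops = 0, 0
        while node != dst and hops < 4 * n_nodes:
            nbrs = sorted(adj[node])
            # Epsilon-greedy choice of the next hop.
            if rng.random() < eps:
                nxt = rng.choice(nbrs)
            else:
                nxt = max(nbrs, key=lambda a: Q[(node, a)])
            # Reward reaching the destination; penalize each extra hop.
            reward = 10.0 if nxt == dst else -1.0
            best_next = max((Q[(nxt, a)] for a in adj[nxt]), default=0.0)
            Q[(node, nxt)] += alpha * (reward + gamma * best_next - Q[(node, nxt)])
            node, hops = nxt, hops + 1
    return Q

def route(Q, adj, src, dst, max_hops=50):
    """Greedily follow the learned Q-values from src toward dst."""
    path, node = [src], src
    while node != dst and len(path) <= max_hops:
        node = max(sorted(adj[node]), key=lambda a: Q[(node, a)])
        path.append(node)
    return path
```

Because each episode samples a different topology, the Q-values encode preferences that generalize across mesh variations, which is the intuition behind why the self-adaptive DQN outperforms a model trained on a single static network.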
