Broad Reinforcement Learning for Supporting Fast Autonomous IoT

The emergence of a massive Internet of Things (IoT) ecosystem is changing how people live. In many practical scenarios, however, IoT still faces significant challenges: reliance on human assistance and unacceptable response times when processing big data. It is therefore urgent to establish new frameworks and algorithms tailored to this kind of fast autonomous IoT. Traditional reinforcement learning and deep reinforcement learning (DRL) approaches are capable of autonomous decision making, but their time-consuming modeling and training procedures limit their applicability. To overcome this dilemma, this article proposes a broad reinforcement learning (BRL) approach suited to fast autonomous IoT: it combines the broad learning system (BLS) with the reinforcement learning paradigm to improve the agent's efficiency and accuracy in both modeling and decision making. Specifically, a BRL framework is first constructed. Then, the associated learning algorithm is carefully designed, covering the introduction of a training pool, the preparation of training samples, and incremental learning for the BLS. Finally, as a case study of fast autonomous IoT, the proposed BRL approach is applied to traffic light control, aiming to alleviate congestion at intersections in smart cities. Experimental results show that the proposed BRL approach learns a better action policy in a shorter execution time than competing approaches.
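The key idea above is to replace a slow, gradient-trained deep network with a BLS, whose output weights admit a fast closed-form (ridge regression) solution, as the Q-function approximator in a reinforcement learning loop. The abstract does not give the actual architecture or hyperparameters, so the class, node counts, and dimensions below are illustrative assumptions; this is a minimal sketch of how a BLS-style Q-approximator could look:

```python
import numpy as np

class BroadQNet:
    """Illustrative BLS-style Q-function approximator.

    Feature nodes Z = tanh(X @ Wf + bf) and enhancement nodes
    H = tanh(Z @ We + be) are randomly generated and fixed; only the
    output weights W are trained, via closed-form ridge regression.
    This closed-form step is what makes (re)training fast compared
    with gradient-based deep networks.
    """

    def __init__(self, state_dim, n_actions, n_feat=20, n_enh=40,
                 lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.Wf = rng.normal(size=(state_dim, n_feat))  # feature-node weights
        self.bf = rng.normal(size=n_feat)
        self.We = rng.normal(size=(n_feat, n_enh))      # enhancement-node weights
        self.be = rng.normal(size=n_enh)
        self.W = np.zeros((n_feat + n_enh, n_actions))  # trainable output weights
        self.lam = lam                                  # ridge regularizer

    def _expand(self, X):
        """Map raw states to the broad (feature + enhancement) representation."""
        Z = np.tanh(X @ self.Wf + self.bf)
        H = np.tanh(Z @ self.We + self.be)
        return np.hstack([Z, H])

    def q_values(self, X):
        """Estimated Q-values, one column per action."""
        return self._expand(X) @ self.W

    def fit(self, states, targets):
        """Solve the output weights in closed form by ridge regression."""
        A = self._expand(states)
        self.W = np.linalg.solve(A.T @ A + self.lam * np.eye(A.shape[1]),
                                 A.T @ targets)
```

In a full BRL loop, `states` and `targets` would come from the training pool: for each sampled transition (s, a, r, s'), the Bellman target for action a is r + γ max_a' Q(s', a'), and `fit` replaces the slow gradient update of DRL.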
