A reinforcement learning approach to improve communication performance and energy utilization in fog-based IoT

Recent research has shown the potential of using available mobile fog devices (such as smartphones, drones, and domestic and industrial robots) as relays to minimize communication outages between sensors and the destination devices where localized Internet-of-Things services (e.g., manufacturing process control, health and security monitoring) are delivered. However, these mobile relays deplete their energy when they move and when they transmit to distant destinations. Power-control mechanisms and intelligent mobility of the relay devices are therefore critical to improving communication performance and energy utilization. In this paper, we propose a Q-learning-based decentralized approach in which each mobile fog relay agent (MFRA) is controlled by an autonomous agent that uses reinforcement learning to simultaneously improve communication performance and energy utilization. Based on feedback from the destination and on its own energy level, each autonomous agent learns whether to remain active and forward the message, or to become passive for that transmission phase. We evaluate the approach against a centralized approach and observe that, with fewer MFRAs, our approach ensures reliable data delivery and reduces the overall energy cost by 56.76%–88.03%.
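The active/passive decision described above can be sketched as a small tabular Q-learning agent. This is a minimal illustration, not the paper's implementation: the discretized energy-level state space, the binary {forward, passive} action set, and the reward that combines destination feedback with an energy penalty are all assumptions chosen for clarity.

```python
import random

# Hypothetical action set: forward the message, or sit out this phase.
ACTIONS = ["forward", "passive"]

class RelayAgent:
    """Sketch of one decentralized MFRA controller (assumed design)."""

    def __init__(self, n_energy_levels=10, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q[state][action]; state = discretized remaining-energy level.
        self.q = {s: {a: 0.0 for a in ACTIONS} for s in range(n_energy_levels)}

    def choose(self, state):
        # Epsilon-greedy exploration over the two actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q[state], key=self.q[state].get)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[next_state].values())
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def relay_reward(delivered, energy_cost, w=0.5):
    # Illustrative reward: destination ACK minus a weighted energy cost.
    return (1.0 if delivered else -1.0) - w * energy_cost
```

In use, each relay would observe its energy level, pick an action, receive the destination's delivery feedback, compute the reward, and call `update` — no central coordinator is involved, which is the point of the decentralized design.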
