Optimizing the resource usage of actor-based systems

Abstract: Runtime environments for IoT data processing systems based on the actor model often apply a thread pool to serve data streams. In this paper, we propose an approach based on Reinforcement Learning (RL) to find a trade-off between resource usage (the thread pool in server machines) and the quality of service of the data streams. We compare our approach with the ThreadPoolExecutor of Akka, an open-source software toolkit. Simulation results show that our approach outperforms the ThreadPoolExecutor with the timeout rule when thread start times are not negligible. Furthermore, tuning our approach is not as tedious as applying the timeout rule.
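The baseline "timeout rule" refers to the idle-thread keep-alive timeout of java.util.concurrent.ThreadPoolExecutor, on which Akka's thread-pool-based dispatchers build. A minimal sketch of that baseline follows; the pool sizes and the 60 s timeout are illustrative values, not parameters taken from the paper:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TimeoutRuleBaseline {
    public static void main(String[] args) throws InterruptedException {
        // With a SynchronousQueue, a new thread is started (up to the
        // maximum) whenever no idle thread can take the task; an idle
        // thread is terminated after the keep-alive timeout, which is
        // the "timeout rule" the paper compares against.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 64,                       // core size 0, max 64 threads
                60L, TimeUnit.SECONDS,       // idle-thread keep-alive timeout
                new SynchronousQueue<>());
        pool.execute(() -> System.out.println("serving one stream message"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

To make the RL side concrete, the sketch below shows a tabular Q-learning controller for pool sizing. It is an assumption-laden illustration, not the paper's exact formulation: the state encoding (a discretized load/pool-size pair), the three actions (shrink, keep, grow), the hyperparameters, and the reward weights delayWeight and threadWeight are all hypothetical.

```java
import java.util.Random;

/** Tabular Q-learning controller for thread-pool sizing; a minimal
 *  sketch under assumed state/action/reward definitions. */
public class PoolSizeQLearner {
    private static final int ACTIONS = 3;            // shrink / keep / grow
    private final double[][] q;                      // Q-table: states x actions
    private final double alpha = 0.1, gamma = 0.9;   // learning rate, discount
    private final double epsilon = 0.2;              // exploration rate
    private final Random rng = new Random(42);

    public PoolSizeQLearner(int numStates) {
        q = new double[numStates][ACTIONS];
    }

    /** Epsilon-greedy action selection over the current Q-values. */
    public int chooseAction(int state) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(ACTIONS);
        int best = 0;
        for (int a = 1; a < ACTIONS; a++)
            if (q[state][a] > q[state][best]) best = a;
        return best;
    }

    /** Standard Q-learning update:
     *  Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). */
    public void update(int s, int a, double reward, int sNext) {
        double maxNext = q[sNext][0];
        for (int an = 1; an < ACTIONS; an++)
            maxNext = Math.max(maxNext, q[sNext][an]);
        q[s][a] += alpha * (reward + gamma * maxNext - q[s][a]);
    }

    /** Hypothetical reward: penalize both queueing delay (QoS loss)
     *  and the number of live threads (resource usage). */
    public static double reward(double avgDelayMs, int threads,
                                double delayWeight, double threadWeight) {
        return -(delayWeight * avgDelayMs + threadWeight * threads);
    }
}
```

In such a setup the controller would periodically observe the dispatcher's queue length and current pool size, pick an action, resize the pool accordingly, and feed the measured delay back as reward; the delayWeight/threadWeight ratio then encodes the resource-versus-QoS trade-off the abstract describes.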
