Flat and hierarchical system deployment for edge computing systems

Abstract: In this paper, we consider the server allocation problem for edge computing system deployment, where each edge cloud is modeled as an M/M/c queue. Our goal is to minimize the overall average system response time of application requests generated by all mobile devices/users. We consider two approaches to edge cloud deployment: flat deployment, where all edge clouds are co-located with the base stations, and hierarchical deployment, where edge clouds can also be co-located with other system components besides the base stations. For flat deployment, we show that edge cloud servers should be allocated evenly across all base stations when the application request arrival rates at the base stations are equal; when the arrival rates differ, we propose a Largest Weighted Reduction Time First (LWRTF) algorithm to assign servers to edge clouds. Numerical comparisons against several other reasonably designed heuristics verify that LWRTF performs very well in terms of minimizing the average system response time. Through theoretical analysis and numerical evaluation, we also show that the hierarchical deployment approach has great potential to reduce the overall average system response time compared to the flat deployment approach. Finally, we investigate the server allocation problem in hierarchical deployment and derive insights to guide practical edge cloud server allocation in real-world systems.
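The abstract names the LWRTF heuristic but does not spell out its steps; the sketch below is a minimal Python illustration, assuming LWRTF hands each additional server to the edge cloud whose arrival-rate-weighted reduction in M/M/c mean response time is largest. The function names (mmc_response_time, lwrtf_allocation), the common service rate mu, and the stability-first initialization are illustrative assumptions rather than the paper's exact formulation; the M/M/c mean response time itself follows the standard Erlang C formula.

```python
import math

def mmc_response_time(lam, mu, c):
    """Mean response time of an M/M/c queue (Erlang C); infinite if unstable."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilization
    if rho >= 1.0:
        return float("inf")           # unstable queue: response time diverges
    # Erlang C probability that an arriving request has to wait
    head = sum(a**k / math.factorial(k) for k in range(c))
    tail = (a**c / math.factorial(c)) / (1 - rho)
    p_wait = tail / (head + tail)
    return p_wait / (c * mu - lam) + 1.0 / mu   # mean waiting time + service time

def lwrtf_allocation(arrival_rates, mu, total_servers):
    """Assign total_servers across edge clouds one at a time, always to the cloud
    with the largest arrival-rate-weighted reduction in mean response time
    (assumed reading of LWRTF)."""
    # Start with the minimum number of servers each cloud needs for stability.
    alloc = [math.floor(lam / mu) + 1 for lam in arrival_rates]
    if sum(alloc) > total_servers:
        raise ValueError("not enough servers to stabilize every edge cloud")
    for _ in range(total_servers - sum(alloc)):
        # Weighted gain of one more server: lam_i * (T_i(c_i) - T_i(c_i + 1)).
        gains = [lam * (mmc_response_time(lam, mu, c) - mmc_response_time(lam, mu, c + 1))
                 for lam, c in zip(arrival_rates, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

if __name__ == "__main__":
    rates = [4.0, 7.5, 2.0]           # hypothetical request arrival rates at three base stations
    print(lwrtf_allocation(rates, mu=1.0, total_servers=20))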
