Latency-Aware Deployment of IoT Services in a Cloud-Edge Environment

Efficient scheduling of data elements and computation units can reduce the latency of processing large-scale IoT stream data. In many cases, moving computation is more cost-effective than moving data. However, deploying computations from the cloud to edge devices faces two challenges. First, edge devices usually have limited computing power and storage capacity, so computation tasks must be scheduled selectively. Second, the overhead of stream data processing varies over time, making it necessary to adaptively adjust service deployment at runtime. In this work, we propose a heuristic approach to adaptively deploying services at runtime. The effectiveness of the proposed approach is demonstrated on real cases from China's State Power Grid.
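To make the placement problem concrete, the following is a minimal illustrative sketch (not the paper's actual heuristic) of latency-aware task placement under edge capacity limits: each stream-processing task is greedily assigned to the lowest-latency node that still has enough capacity. All names, the cost model, and the greedy ordering are assumptions for illustration only.

```python
# Illustrative greedy placement: assign each stream-processing task to the
# node (edge or cloud) with the lowest latency, subject to capacity limits.
# The node/task structures and cost model are assumed for this sketch.

def place_tasks(tasks, nodes):
    """tasks: list of (name, cpu_demand);
    nodes: dict name -> {'cpu': capacity, 'latency': ms}."""
    free = {n: spec["cpu"] for n, spec in nodes.items()}
    placement = {}
    # Place heavier (harder-to-fit) tasks first.
    for name, demand in sorted(tasks, key=lambda t: -t[1]):
        # Candidate nodes with enough remaining capacity, lowest latency first.
        candidates = sorted(
            (n for n in nodes if free[n] >= demand),
            key=lambda n: nodes[n]["latency"],
        )
        if not candidates:
            raise RuntimeError(f"no node can host task {name}")
        chosen = candidates[0]
        free[chosen] -= demand
        placement[name] = chosen
    return placement

nodes = {
    "edge-1": {"cpu": 4, "latency": 5},    # limited capacity, low latency
    "cloud": {"cpu": 100, "latency": 50},  # ample capacity, high latency
}
tasks = [("filter", 2), ("join", 3), ("aggregate", 2)]
print(place_tasks(tasks, nodes))
# → {'join': 'edge-1', 'filter': 'cloud', 'aggregate': 'cloud'}
```

The sketch captures the first challenge above (selective scheduling under limited edge capacity); an adaptive variant would re-run the placement when observed processing overhead changes at runtime.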
