Online Dispatching and Fair Scheduling of Edge Computing Tasks: A Learning-Based Approach

The emergence of edge computing can effectively tackle the large transmission delays caused by the long distance between user devices and remote cloud servers. Users can offload tasks to nearby edge servers for computation, so that the average task response time can be minimized through effective task dispatching and scheduling. However, two challenges arise: 1) in the task dispatching phase, the dynamic nature of network conditions and server loads makes it difficult to select the optimal edge server for each offloaded task, and 2) in the task scheduling phase, each edge server may face a large number of offloaded tasks to schedule, resulting in a long average task response time or even severe task starvation. In this article, we propose OTDS, an online task dispatching and fair scheduling method that combines online learning (OL) and deep reinforcement learning (DRL) techniques to address these two challenges. Specifically, using an OL approach, OTDS estimates network conditions and server loads in real time and dynamically assigns tasks to the optimal edge servers accordingly. Meanwhile, at each edge server, by combining the round-robin mechanism with DRL, OTDS allocates appropriate resources to each task according to its time sensitivity, achieving both high efficiency and fairness in task scheduling. Evaluation results show that our online method dynamically allocates network and computing resources to offloaded tasks according to their time-sensitive requirements. Thus, OTDS outperforms existing methods in terms of efficiency and fairness in task dispatching and scheduling, significantly reducing the average task response time.
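The abstract does not specify the dispatcher's update rule, so the following is only a minimal Python sketch of the kind of online estimate-and-dispatch loop it describes: an epsilon-greedy choice over per-server response-time estimates maintained with an exponential moving average. The class name `OnlineDispatcher`, the parameters `alpha` and `epsilon`, and the toy numbers are illustrative assumptions, not details from the paper.

```python
import random


class OnlineDispatcher:
    """Bandit-style dispatcher sketch: keeps a running estimate of each edge
    server's response time and routes new tasks to the server with the lowest
    estimate, exploring occasionally so stale estimates get refreshed."""

    def __init__(self, num_servers, alpha=0.3, epsilon=0.1):
        self.estimates = [0.0] * num_servers  # estimated response time per server
        self.alpha = alpha                    # smoothing factor of the moving average
        self.epsilon = epsilon                # exploration probability

    def dispatch(self):
        # Explore with small probability to track drifting conditions.
        if random.random() < self.epsilon:
            return random.randrange(len(self.estimates))
        # Exploit: pick the server with the lowest estimated response time.
        return min(range(len(self.estimates)), key=lambda s: self.estimates[s])

    def update(self, server, observed_response_time):
        # Exponential moving average adapts to changing network conditions
        # and server loads as new observations arrive.
        self.estimates[server] = ((1 - self.alpha) * self.estimates[server]
                                  + self.alpha * observed_response_time)


if __name__ == "__main__":
    dispatcher = OnlineDispatcher(num_servers=3)
    true_means = [120.0, 80.0, 150.0]  # hypothetical mean response times (ms)
    for _ in range(200):
        s = dispatcher.dispatch()
        observed = random.gauss(true_means[s], 10.0)
        dispatcher.update(s, observed)
    print("Learned estimates (ms):", [round(e, 1) for e in dispatcher.estimates])
```

In this toy loop the dispatcher converges toward the server with the lowest mean response time; the paper's per-server scheduling stage (round-robin combined with DRL) is not modeled here.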