Distributed Task Migration Optimization in MEC by Deep Reinforcement Learning Strategy

Mobility management is a challenging problem in Mobile Edge Computing (MEC). When a device moves, its computation tasks need to be dynamically migrated among multiple edge servers to maintain service continuity. This paper proposes a deep reinforcement learning solution for distributed task migration optimization in MEC that targets service delay. Within the Multi-Agent Deep Reinforcement Learning (MADRL) framework, we construct an Adaptive Weight Deep Deterministic Policy Gradient (AWDDPG) algorithm to jointly optimize migration cost and service delay, and adopt centralized training with distributed execution to cope with the high-dimensional state and action spaces. Experiments show that our algorithm significantly reduces service delay compared with related algorithms.
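To make the centralized-training, distributed-execution idea concrete, the following is a minimal sketch (in PyTorch) and not the paper's exact AWDDPG: each agent has its own actor that sees only its local observation, while a single centralized critic sees all agents' observations and actions during training. The `Actor`, `CentralCritic`, and `adaptive_reward` names, the toy dimensions, and the fixed weight `w` are illustrative assumptions; the paper's adaptive-weight rule itself is not reproduced here.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 3   # assumed toy dimensions

class Actor(nn.Module):
    """Per-agent policy: local observation -> continuous migration action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized Q-function over the joint observations and actions of all agents."""
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

def adaptive_reward(delay, migration_cost, w):
    """Weighted scalarization of the two objectives; how w is adapted
    is the paper's contribution and is not shown here."""
    return -(w * delay + (1.0 - w) * migration_cost)

# One illustrative training step on a fake batch.
actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

batch = 32
obs = torch.randn(batch, N_AGENTS, OBS_DIM)      # local observations
with torch.no_grad():                            # joint action for the critic target
    acts = torch.stack([actors[i](obs[:, i]) for i in range(N_AGENTS)], dim=1)
delay = torch.rand(batch)                        # placeholder delay measurements
cost = torch.rand(batch)                         # placeholder migration costs
reward = adaptive_reward(delay, cost, w=0.5)     # fixed weight for illustration

# Critic regression toward a (here: one-step) reward target.
q = critic(obs.flatten(1), acts.flatten(1)).squeeze(-1)
critic_loss = nn.functional.mse_loss(q, reward)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Each actor ascends the centralized Q with respect to its own action only.
for i, opt in enumerate(actor_opts):
    acts_i = acts.clone()
    acts_i[:, i] = actors[i](obs[:, i])          # re-evaluate agent i with gradients
    actor_loss = -critic(obs.flatten(1), acts_i.flatten(1)).mean()
    opt.zero_grad(); actor_loss.backward(); opt.step()
```

At execution time each agent would run only its own `Actor` on local observations, which is what makes the scheme distributed despite the centralized critic used during training.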