A Time-Driven Workflow Scheduling Strategy for Reasoning Tasks of Autonomous Driving in Edge Environment

Scheduling the real-time reasoning tasks generated by autonomous vehicles within their tolerance time across different time slots is a key problem in autonomous driving. Traditionally, tasks are scheduled on the on-board unit (OBU), which leads to long task completion times. Heuristic algorithms are widely used for task scheduling, but they are prone to premature convergence. Scheduling tasks in the edge environment can effectively reduce task completion time. In this paper, a workflow scheduling strategy was designed for the edge environment according to the differences among reasoning tasks and the changes of edge nodes across time slots. First, a Markov decision process (MDP) model was built to describe the problem scenario, and the completion time of reasoning tasks was calculated by the workflow scheduling algorithm. Second, a Q-learning algorithm based on simulated annealing (SA-QL) was proposed to optimize the completion time of reasoning tasks. Finally, the performance of the simulated-annealing-based reinforcement learning (SA-RL) algorithms and the particle swarm optimization (PSO) algorithm was evaluated comprehensively from four perspectives: effectiveness, feasibility, exploration, and convergence. The experimental results show that both the SA-RL algorithms and the PSO algorithm perform well in feasibility and effectiveness; the TD(0) algorithms show better exploration, while the TD(λ) algorithms show better convergence.
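The paper itself gives no code, but a minimal sketch of the SA-QL idea, Q-learning whose exploration is governed by a simulated-annealing (Metropolis) acceptance rule with a decaying temperature, might look as follows. The `env` interface, hyperparameter values, and cooling schedule are assumptions for illustration only, not the authors' implementation; in the paper's setting a state would encode the scheduling situation in a time slot and an action would assign a reasoning task to an edge node or the OBU.

```python
import math
import random
from collections import defaultdict

def sa_q_learning(env, episodes=500, alpha=0.1, gamma=0.9,
                  t0=1.0, cooling=0.98):
    """Q-learning with a simulated-annealing (Metropolis) action rule.

    Hypothetical `env` interface (assumed, not from the paper):
    reset() -> state, step(action) -> (next_state, reward, done),
    and env.actions listing the discrete actions (e.g. candidate nodes).
    """
    q = defaultdict(float)          # Q[(state, action)] -> estimated value
    temperature = t0

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            greedy = max(env.actions, key=lambda a: q[(state, a)])
            candidate = random.choice(env.actions)
            # Metropolis criterion: accept the exploratory action with
            # probability exp(-(Q_greedy - Q_candidate) / T).
            delta = q[(state, greedy)] - q[(state, candidate)]
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                action = candidate
            else:
                action = greedy

            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(q[(next_state, a)] for a in env.actions)
            # Standard TD(0) Q-learning update.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state

        temperature = max(temperature * cooling, 1e-3)  # cool down each episode
    return q
```

At a high temperature nearly every candidate action is accepted, which encourages exploration; as the temperature decays the policy becomes increasingly greedy, shifting toward exploitation. This is the exploration-convergence trade-off the abstract examines when comparing the TD(0) and TD(λ) variants.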
