ISSN 1000-1239 CN 11-1777/TP

计算机研究与发展 (Journal of Computer Research and Development) ›› 2020, Vol. 57 ›› Issue (7): 1539-1554. doi: 10.7544/issn1000-1239.2020.20190291

• Network Technology •

Research on Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing

Lu Haifeng, Gu Chunhua, Luo Fei, Ding Weichao, Yang Ting, Zheng Shuai   

  1. (School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237) (1771097725@qq.com)
  • Online: 2020-07-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61472139) and the Educational Teaching Law and Method Research Project of East China University of Science and Technology (ZH1726107).

Abstract: In mobile edge computing, local devices can offload tasks to servers near the network edge for data storage and computation, thereby reducing service delay and power consumption, so the task offloading decision is of great research value. This paper first constructs an offloading model for large-scale heterogeneous mobile edge computing with multiple service nodes and multiple dependencies among the subtasks of a mobile task. Then, an improved deep reinforcement learning algorithm is proposed to optimize the task offloading strategy in light of practical application scenarios of mobile edge computing. Finally, the advantages and disadvantages of each offloading strategy are analyzed by comprehensively comparing energy consumption, cost, load balancing, delay, network usage, and average execution time. The simulation results show that the improved HERDRQN algorithm, which is based on a long short-term memory (LSTM) network and hindsight experience replay (HER), performs well in terms of energy consumption, cost, load balancing, and delay. In addition, this paper uses the strategy of each algorithm to offload a given number of applications and compares the resulting distribution of heterogeneous devices across CPU utilization levels, verifying the relationship between each offloading strategy and the evaluation metrics and confirming that the strategy generated by the HERDRQN algorithm is sound and effective for the task offloading problem.

Key words: mobile edge computing, task offloading, deep reinforcement learning, long short-term memory (LSTM) network, hindsight experience replay (HER)
