ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2019, Vol. 56 ›› Issue (8): 1708-1720. doi: 10.7544/issn1000-1239.2019.20190155

Special Topic: 2019 Frontier Progress in Artificial Intelligence

• Artificial Intelligence •

An Experience-Guided Deep Deterministic Actor-Critic Algorithm with Multi-Actor

Chen Hongming1, Liu Quan1,2,3,4, Yan Yan1, He Bin1, Jiang Yubin1, Zhang Linlin1

  1. 1(School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006); 2(Provincial Key Laboratory for Computer Information Processing Technology (Soochow University), Suzhou, Jiangsu 215006); 3(Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012); 4(Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210000) (20174227007@stu.suda.edu.cn)
  • Online: 2019-08-01
  • Supported by: National Natural Science Foundation of China (61772355, 61702055, 61472262, 61502323, 61502329); Major Program of Natural Science Research of Jiangsu Higher Education Institutions (18KJA520011, 17KJA520004); Suzhou Applied Basic Research Program, Industrial Part (SYG201422)

Abstract: Continuous control has long been an important research direction in reinforcement learning. In recent years, the development of deep learning (DL) and the advent of the deterministic policy gradient (DPG) algorithm have provided many useful approaches to continuous control problems. The main difficulty these methods face is exploration in the continuous action space, and many of them explore by injecting external noise into the action space; however, this form of exploration performs poorly on some continuous control tasks. This paper proposes an experience-guided deep deterministic actor-critic algorithm with multi-actor (EGDDAC-MA) that requires no external noise: it learns a guiding network from its own excellent experiences and uses it to guide action selection and the updates of the actor and critic networks. In addition, to alleviate fluctuations in network learning, the algorithm uses a multi-actor actor-critic (AC) model that assigns a different actor to each phase of an episode; these actors are independent of one another and do not interfere with each other. Experimental results show that, compared with the DDPG, TRPO, and PPO algorithms, the proposed algorithm performs better on most continuous control tasks in the GYM simulation platform.
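The abstract only sketches the method, but its two core ideas can be illustrated concretely. Below is a minimal PyTorch sketch written for this page, not taken from the paper: the network sizes, the phase-splitting rule, the helper names (phase_of, update_guide, update_actor), and the exact form of the guidance term are all assumptions. It shows (a) one actor per episode phase selecting actions with no external exploration noise, and (b) a guiding network fit to state-action pairs from high-return ("excellent") episodes, whose output pulls each actor's update.

```python
# Illustrative sketch only; names, sizes, and the guidance loss are
# assumptions, not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, NUM_PHASES = 8, 2, 3

def mlp(in_dim, out_dim):
    # Small two-layer network; sizes are arbitrary for illustration.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

# One independent actor per episode phase, plus a guiding network of the
# same shape and a single critic Q(s, a).
actors = [nn.Sequential(mlp(STATE_DIM, ACTION_DIM), nn.Tanh())
          for _ in range(NUM_PHASES)]
guide = nn.Sequential(mlp(STATE_DIM, ACTION_DIM), nn.Tanh())
critic = mlp(STATE_DIM + ACTION_DIM, 1)

actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-4) for a in actors]
guide_opt = torch.optim.Adam(guide.parameters(), lr=1e-3)

def phase_of(step, episode_len):
    # Assign the current timestep to one of NUM_PHASES equal slices.
    return min(step * NUM_PHASES // episode_len, NUM_PHASES - 1)

def select_action(state, step, episode_len):
    # Deterministic action from the current phase's own actor;
    # no external noise source is injected.
    with torch.no_grad():
        return actors[phase_of(step, episode_len)](state)

def update_guide(good_states, good_actions):
    # Fit the guiding network, by supervised regression, to state-action
    # pairs collected from high-return ("excellent") episodes.
    loss = F.mse_loss(guide(good_states), good_actions)
    guide_opt.zero_grad(); loss.backward(); guide_opt.step()

def update_actor(phase, states, guidance_weight=0.1):
    # DPG-style objective (maximize Q) plus a term pulling the actor
    # toward the guiding network -- one plausible form of "guidance".
    actions = actors[phase](states)
    q = critic(torch.cat([states, actions], dim=-1))
    loss = -q.mean() + guidance_weight * F.mse_loss(actions, guide(states).detach())
    actor_opts[phase].zero_grad(); loss.backward(); actor_opts[phase].step()
```

Under these assumptions, the guiding network takes over the role that an external noise process plays in DDPG: instead of perturbing actions, the agent is steered toward behavior that previously earned high returns, and splitting the episode across independent actors keeps one phase's updates from interfering with another's.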

Key words: reinforcement learning, deep reinforcement learning, deterministic actor-critic, experience guiding, expert guiding, multi-actor

CLC number: