    Chen Hongming, Liu Quan, Yan Yan, He Bin, Jiang Yubin, Zhang Linlin. An Experience-Guided Deep Deterministic Actor-Critic Algorithm with Multi-Actor[J]. Journal of Computer Research and Development, 2019, 56(8): 1708-1720. DOI: 10.7544/issn1000-1239.2019.20190155

    An Experience-Guided Deep Deterministic Actor-Critic Algorithm with Multi-Actor


      Abstract: Continuous control has long been an important research direction in reinforcement learning. In recent years, the development of deep learning (DL) and the advent of the deterministic policy gradient (DPG) algorithm have provided many good ideas for solving continuous control problems. The main difficulty these methods face is exploration in the continuous action space, and many of them explore by injecting external noise into the action space; this form of exploration, however, does not perform well on some continuous control tasks. This paper proposes an experience-guided deep deterministic actor-critic algorithm with multi-actor (EGDDAC-MA) that requires no external noise: it learns a guiding network from its own excellent experiences and uses it to guide action selection and the updates of the actor and critic networks. In addition, to alleviate fluctuations in network learning, the algorithm uses a multi-actor actor-critic (AC) model that assigns a separate actor to each phase of an episode; these actors are independent of one another and each executes only its own phase. Experimental results show that, compared with the DDPG, TRPO, and PPO algorithms, EGDDAC-MA achieves better performance on most continuous control tasks in the GYM simulation platform.
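
      The sketch below illustrates the architecture the abstract describes, assuming PyTorch. The network sizes, the equal-split phase boundaries, the criterion for "excellent" experiences, and the guide_weight imitation term are illustrative assumptions, not the paper's exact formulation.

      # A minimal sketch of the EGDDAC-MA idea, assuming PyTorch. GuideNet
      # training details, phase boundaries, and guide_weight are assumptions.
      import torch
      import torch.nn as nn

      class MLP(nn.Module):
          """Small fully connected network used for actors, critic, and guide."""
          def __init__(self, in_dim, out_dim):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(in_dim, 64), nn.ReLU(),
                  nn.Linear(64, out_dim),
              )
          def forward(self, x):
              return self.net(x)

      state_dim, action_dim, n_phases = 8, 2, 3

      # One actor per episode phase; each actor is trained only on
      # transitions from its own phase, so the actors do not interfere.
      actors = [MLP(state_dim, action_dim) for _ in range(n_phases)]
      critic = MLP(state_dim + action_dim, 1)

      # Guiding network: fit by behavioral cloning on "excellent" experiences
      # (e.g., transitions from episodes whose return exceeds a running
      # threshold), replacing external exploration noise.
      guide = MLP(state_dim, action_dim)

      def phase_of(t, horizon, n_phases):
          """Map a timestep to its episode phase (equal splits, an assumption)."""
          return min(t * n_phases // horizon, n_phases - 1)

      def select_action(state, t, horizon):
          """Deterministic action from the actor responsible for this phase."""
          actor = actors[phase_of(t, horizon, n_phases)]
          with torch.no_grad():
              return actor(state)

      def actor_loss(actor, states, guide_weight=0.1):
          """DPG objective plus a term pulling actions toward the guide
          network's output (the exact guidance form is an assumption)."""
          actions = actor(states)
          q = critic(torch.cat([states, actions], dim=-1))
          imitation = ((actions - guide(states).detach()) ** 2).mean()
          return -q.mean() + guide_weight * imitation

      # Example: pick an action at timestep 40 of a 100-step episode.
      a = select_action(torch.zeros(state_dim), t=40, horizon=100)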
