ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (3): 576-589. doi: 10.7544/issn1000-1239.2020.20190159

• Artificial Intelligence •

Averaged Weighted Double Deep Q-Network

Wu Jinjin1, Liu Quan1,2,3,4, Chen Song1, Yan Yan1   

  1(School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006);2(Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012);3(Jiangsu Key Laboratory of Computer Information Processing Technology (Soochow University), Suzhou, Jiangsu 215006);4(Collaborative Innovation Center of Novel Software Technology and Industrialization (Nanjing University), Nanjing 210023) (20174227020@stu.suda.edu.cn)
  • Online: 2020-03-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61772355, 61702055, 61502323, 61502329), the Jiangsu Provincial Natural Science Research University Major Projects (18KJA520011, 17KJA520004), the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University) (93K172014K04, 93K172017K18), the Suzhou Industrial Application of Basic Research Program (SYG201422), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Abstract: The instability and variability of deep reinforcement learning algorithms have a significant effect on their performance. Deep Q-network (DQN) was the first algorithm to successfully combine deep neural networks with Q-learning, and it has been shown to achieve human-level control on tasks that require both rich perception of high-dimensional raw inputs and policy control. However, DQN tends to overestimate action values, and such overestimation can degrade the agent's performance. Although double deep Q-network (DDQN) was proposed to mitigate the impact of overestimation, it still suffers from underestimation of action values. In some complex reinforcement learning environments, even a small estimation error may have a large impact on the learned policy. To address the overestimation of action values in DQN and the underestimation of action values in DDQN, this paper proposes a new deep reinforcement learning framework, averaged weighted double deep Q-network (AWDDQN), which integrates the weighted double estimator into DDQN. To further reduce the estimation error of the target value, the target is generated by averaging previously learned action-value estimates, and the number of averaged estimates is determined dynamically from the temporal difference error. Experimental results show that AWDDQN effectively reduces the estimation bias and improves the agent's performance on some Atari 2600 games.
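
The Python sketch below illustrates one plausible way to form the target value described in the abstract. It is only an illustration under stated assumptions, not the authors' implementation: the function and parameter names (compute_awddqn_target, choose_num_snapshots, the weighting constant c, the bounds in the snapshot-count rule) are hypothetical, and the weighting is assumed to take the standard weighted-double-estimator form that interpolates between single-estimator and double-estimator evaluations.

import numpy as np

def choose_num_snapshots(td_error, k_min=1, k_max=10, scale=1.0):
    # Hypothetical rule for the "dynamically determined" number of averaged
    # estimates: a larger recent temporal difference error leads to averaging
    # over more previously learned estimates.
    frac = min(abs(td_error) / scale, 1.0)
    return k_min + int(round(frac * (k_max - k_min)))

def compute_awddqn_target(reward, gamma, q_online_next, q_snapshot_next_list, c=1.0):
    # q_online_next:        Q(s', .) from the current online network, shape (num_actions,)
    # q_snapshot_next_list: list of K arrays Q(s', .) from previously learned
    #                       (snapshot) networks
    # Average the K previously learned action-value estimates.
    q_avg_next = np.mean(q_snapshot_next_list, axis=0)

    # Double estimation: select actions with the online estimate,
    # evaluate them with the averaged estimate.
    a_max = int(np.argmax(q_online_next))
    a_min = int(np.argmin(q_online_next))

    # Weighted double estimator: beta in [0, 1) interpolates between the
    # single-estimator value (prone to overestimation) and the
    # double-estimator value (prone to underestimation).
    gap = abs(q_avg_next[a_max] - q_avg_next[a_min])
    beta = gap / (c + gap)
    q_eval = beta * q_online_next[a_max] + (1.0 - beta) * q_avg_next[a_max]

    return reward + gamma * q_eval

In training, the returned value would serve as the regression target for the online network, and the resulting temporal difference error could feed choose_num_snapshots to set the number of averaged estimates for the next update.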

Key words: deep reinforcement learning, deep Q-network, estimation error, weighted double estimator, temporal difference
