ISSN 1000-1239 CN 11-1777/TP

• Artificial Intelligence •

### Averaged Weighted Double Deep Q-Network

Wu Jinjin1, Liu Quan1,2,3,4, Chen Song1, Yan Yan1

1(School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006); 2(Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012); 3(Jiangsu Key Laboratory of Computer Information Processing Technology (Soochow University), Suzhou, Jiangsu 215006); 4(Collaborative Innovation Center of Novel Software Technology and Industrialization (Nanjing University), Nanjing 210023) (20174227020@stu.suda.edu.cn)
• Online: 2020-03-01
• Supported by:
This work was supported by the National Natural Science Foundation of China (61772355, 61702055, 61502323, 61502329), the Jiangsu Provincial Natural Science Research University Major Projects (18KJA520011, 17KJA520004), the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University) (93K172014K04, 93K172017K18), the Suzhou Industrial Application of Basic Research Program (SYG201422), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Abstract: The instability and variability of deep reinforcement learning algorithms have a significant effect on their performance. Deep Q-Network (DQN) was the first algorithm to successfully combine deep neural networks with Q-learning, and it has been shown to achieve human-level control on problems that require both rich perception of high-dimensional raw inputs and policy control. However, DQN overestimates action values, and this overestimation can degrade the agent's performance. Double deep Q-Network (DDQN) was proposed to mitigate the impact of overestimation, but it in turn tends to underestimate action values. In some complex reinforcement learning environments, even a small estimation error may have a large impact on the learned policy. To address the overestimation of action values in DQN and their underestimation in DDQN, this paper proposes a new deep reinforcement learning framework, AWDDQN, which integrates the newly proposed weighted double estimator into DDQN. To reduce the estimation error of the target value, the target is generated by averaging previously learned action-value estimates, and the number of averaged estimates is determined dynamically from the temporal difference error. Experimental results show that AWDDQN effectively reduces the estimation bias and improves the agent's performance on several Atari 2600 games.
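The target construction described in the abstract can be illustrated with a short sketch. This is a hypothetical NumPy illustration, not the paper's implementation: the function name `awddqn_target`, the weighting rule for `beta`, and the constant `c` are assumptions modeled on the weighted double estimator literature; the history of past target-network estimates stands in for the K previously learned values, where K would be chosen dynamically from the TD error.

```python
import numpy as np

def awddqn_target(q_online, q_target_history, reward, gamma, c=1.0):
    """Sketch of an averaged weighted double Q-learning target for one
    transition (hypothetical names; the paper's exact update may differ).

    q_online:         online-network action values Q(s', .), shape (A,)
    q_target_history: list of K past target-network value vectors for s';
                      in AWDDQN, K would be set from the TD error
    """
    # Average the K previously learned estimates to reduce target variance.
    q_avg = np.mean(q_target_history, axis=0)          # shape (A,)

    # Double estimator: select the greedy action with the online network...
    a_star = int(np.argmax(q_online))
    a_low = int(np.argmin(q_avg))

    # The weight beta interpolates between the single estimator (which
    # overestimates) and the double estimator (which underestimates);
    # a larger value spread pushes beta toward the double estimate.
    spread = abs(q_avg[a_star] - q_avg[a_low])
    beta = spread / (c + spread)

    # ...and evaluate it with a weighted mix of both value estimates.
    q_eval = beta * q_online[a_star] + (1.0 - beta) * q_avg[a_star]
    return reward + gamma * q_eval
```

For example, with `q_online = [1.0, 2.0]` and two past target estimates `[1.0, 1.5]` and `[1.0, 2.5]`, the averaged values are `[1.0, 2.0]`, `beta = 0.5`, and the target for `reward = 0, gamma = 0.9` is `0.9 * 2.0 = 1.8`; the mixed evaluation always lies between the single and double estimates, which is how the bias of each is tempered.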