ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2020, Vol. 57, Issue 3: 576-589. doi: 10.7544/issn1000-1239.2020.20190159


Averaged Weighted Double Deep Q-Network

Wu Jinjin1, Liu Quan1,2,3,4, Chen Song1, Yan Yan1   

  1. School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006
  2. Key Laboratory of Symbolic Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012
  3. Jiangsu Key Laboratory of Computer Information Processing Technology (Soochow University), Suzhou, Jiangsu 215006
  4. Collaborative Innovation Center of Novel Software Technology and Industrialization (Nanjing University), Nanjing 210023
  • Online: 2020-03-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61772355, 61702055, 61502323, 61502329), the Jiangsu Provincial Natural Science Research University Major Projects (18KJA520011, 17KJA520004), the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University) (93K172014K04, 93K172017K18), the Suzhou Industrial Application of Basic Research Program (SYG201422), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Abstract: The instability and variability of deep reinforcement learning algorithms have a significant effect on their performance. Deep Q-Network was the first algorithm to successfully combine deep neural networks with Q-learning, and it has been shown to achieve human-level control on problems that require both rich perception of high-dimensional raw inputs and policy control. However, deep Q-Network overestimates action values, and such overestimation can degrade the agent's performance. Although double deep Q-Network was proposed to mitigate the impact of overestimation, it in turn tends to underestimate action values. In some complex reinforcement learning environments, even a small estimation error may have a large impact on the learned policy. To address the overestimation of action values in deep Q-Network and their underestimation in double deep Q-Network, this paper proposes a new deep reinforcement learning framework, averaged weighted double deep Q-network (AWDDQN), which integrates the newly proposed weighted double estimator into double deep Q-Network. To further reduce the estimation error of the target value, the target is generated by averaging previously learned action-value estimates, and the number of averaged estimates is determined dynamically from the temporal difference error. The experimental results show that AWDDQN can effectively reduce the estimation bias and can enhance the agent's performance in some Atari 2600 games.
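The abstract describes the mechanism without formulas. As a rough illustration, a minimal sketch of such a target computation is given below, assuming a weight in the style of weighted double Q-learning and an Averaged-DQN-style average over the K most recent network snapshots; the function names and the choose_k schedule are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def awddqn_target(q_snapshots, q_target_snapshots, reward, next_state, done,
                  gamma=0.99, c=1.0):
    """Sketch of an averaged weighted double target for one transition.

    q_snapshots        -- the K most recent online Q functions, each mapping
                          a state to a vector of action values
    q_target_snapshots -- the matching target-network Q functions
    c                  -- hyperparameter of the weighted double estimator
    """
    if done:
        return reward

    # Average previously learned action-value estimates to reduce the
    # variance of the target (Averaged-DQN style).
    q_online = np.mean([q(next_state) for q in q_snapshots], axis=0)
    q_tgt = np.mean([q(next_state) for q in q_target_snapshots], axis=0)

    # Greedy action under the online estimate, as in double deep Q-Network.
    a_star = int(np.argmax(q_online))
    a_min = int(np.argmin(q_tgt))

    # Weight beta in [0, 1] interpolates between the single estimator
    # (which overestimates) and the double estimator (which underestimates).
    spread = abs(q_tgt[a_star] - q_tgt[a_min])
    beta = spread / (c + spread)

    blended = beta * q_online[a_star] + (1.0 - beta) * q_tgt[a_star]
    return reward + gamma * blended

def choose_k(td_error, k_min=1, k_max=10, scale=1.0):
    # Grow the number of averaged snapshots with the magnitude of the
    # temporal difference error; this linear schedule is an assumed stand-in
    # for the paper's dynamic rule.
    return int(np.clip(k_min + scale * abs(td_error), k_min, k_max))
```

In use, an agent would keep a ring buffer of the last k_max network snapshots, call choose_k after each update to decide how many of them enter the average, and regress the online network toward awddqn_target as in standard deep Q-Network training.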

Key words: deep reinforcement learning, deep Q-network, estimation error, weighted double estimator, temporal difference
