Abstract:
The instability and variability of deep reinforcement learning algorithms have a significant effect on their performance. Deep Q-Network (DQN) was the first algorithm to successfully combine deep neural networks with Q-learning, and it has been shown to achieve human-level control on tasks that require both rich perception of high-dimensional raw inputs and policy control. However, DQN tends to overestimate action values, and such overestimation can degrade the agent's performance. Although double deep Q-Network (double DQN) was proposed to mitigate the impact of overestimation, it suffers from the opposite problem of underestimating action values. In some complex reinforcement learning environments, even a small estimation error may have a large impact on the learned policy. To address the overestimation of action values in DQN and the underestimation in double DQN, this paper proposes a new deep reinforcement learning framework, AWDDQN, which integrates the newly proposed weighted double estimator into double DQN. To further reduce the estimation error of the target value, the target is generated by averaging previously learned action-value estimates, and the number of averaged action values is determined dynamically from the temporal difference error. The experimental results show that AWDDQN effectively reduces estimation bias and improves the agent's performance on several Atari 2600 games.
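
For concreteness, one plausible form of the AWDDQN target is sketched below; the weighting scheme, the symbol $\beta$, and the indexing of the previously learned parameters are assumptions for illustration and are not taken verbatim from the abstract. The sketch combines a weighted double estimate of the greedy action's value with an average over the $K$ most recently learned estimates, where $K$ is chosen from the temporal difference error:

% Sketch of a weighted, averaged double-DQN target (assumed notation):
% a^* is the greedy action under the online parameters \theta_t,
% \beta \in [0,1] weights the two estimators, and K (selected from the
% TD error) is the number of previously learned estimates averaged.
\begin{align*}
a^{*} &= \arg\max_{a} Q(s_{t+1}, a; \theta_t) \\
y_t &= r_t + \gamma \, \frac{1}{K} \sum_{k=1}^{K}
      \Big[ \beta \, Q\big(s_{t+1}, a^{*}; \theta^{-}_{t-k+1}\big)
      + (1-\beta) \, Q\big(s_{t+1}, a^{*}; \theta_{t-k+1}\big) \Big]
\end{align*}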