ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development (计算机研究与发展) ›› 2015, Vol. 52 ›› Issue (12): 2764-2775. doi: 10.7544/issn1000-1239.2015.20148160

• Artificial Intelligence •

A Heuristic Dyna Optimization Algorithm Using Approximate Model Representation

Zhong Shan1,2, Liu Quan1,3,5, Fu Qiming1,4, Zhang Zongzhang1, Zhu Fei1, Gong Shengrong1,2   

  1(School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006); 2(School of Computer Science and Engineering, Changshu Institute of Technology, Changshu, Jiangsu 215500); 3(Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210000); 4(College of Electronic & Information Engineering, Suzhou University of Science and Technology, Suzhou, Jiangsu 215006); 5(Key Laboratory of Symbol Computation and Knowledge Engineering (Jilin University), Ministry of Education, Changchun 130012) (sunshine-620@163.com)
  • Online: 2015-12-01
  • Supported by: the National Natural Science Foundation of China (61272005, 61303108, 61373094, 61472262, 61502323, 61502329), the Natural Science Foundation of Jiangsu Province (BK2012616), the Natural Science Research Project of Jiangsu Higher Education Institutions (13KJB520020), the Fund of the Key Laboratory of Symbol Computation and Knowledge Engineering (Jilin University), Ministry of Education (93K172014K04), and the Suzhou Applied Basic Research Program (SYG201422)

Abstract: To address the problems of table-lookup-based Dyna optimization algorithms, such as slow convergence in large state spaces, difficulty in representing the environment model, and delayed learning of a changed environment, this paper proposes a novel heuristic Dyna optimization algorithm using approximate model representation (HDyna-AMR), which approximates the Q-value function with a linear function and solves for the optimal value function by gradient descent. HDyna-AMR can be divided into two phases: a learning phase and a planning phase. In the learning phase, the algorithm builds an approximate model of the environment from the agent's interaction samples and records the frequency with which each feature appears; in the planning phase, it performs value-function planning on the approximate model, assigning extra rewards according to the feature frequencies recorded during model approximation. The paper also proves the convergence of HDyna-AMR theoretically. Experimentally, we apply HDyna-AMR to the extended Boyan chain problem and the Mountain car problem. The results show that HDyna-AMR can learn an approximately optimal policy in both discrete and continuous state spaces, and that, compared with Dyna-LAPS (Dyna-style planning with linear approximation and prioritized sweeping) and Sarsa(λ), it converges faster and corrects its approximate model more promptly when the environment changes.
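
As a concrete illustration of the two phases, the following is a minimal Python sketch of a Dyna-style loop with linear Q-value approximation, gradient-descent updates, a learned linear model of the environment, and a feature-frequency extra reward. It is a sketch under stated assumptions, not the authors' implementation: the feature dimension N_FEATURES, the step size ALPHA, the bonus weight KAPPA, the linear model (F, b), and the way simulated features are sampled during planning are all illustrative choices.

    import numpy as np

    N_FEATURES = 64    # dimension of the binary feature vector phi(s, a) (assumed)
    ALPHA = 0.05       # gradient-descent step size (assumed)
    GAMMA = 0.99       # discount factor
    KAPPA = 0.01       # weight of the heuristic extra reward (assumed)
    N_PLANNING = 10    # planning updates per real interaction step (assumed)

    rng = np.random.default_rng(0)

    theta = np.zeros(N_FEATURES)             # linear Q weights: Q(s, a) = theta . phi(s, a)
    F = np.zeros((N_FEATURES, N_FEATURES))   # learned linear transition model in feature space
    b = np.zeros(N_FEATURES)                 # learned linear reward model
    counts = np.zeros(N_FEATURES)            # feature appearance frequencies

    def learn(phi_sa, reward, phi_next):
        """Learning phase: a gradient-descent TD update on theta, plus
        incremental fitting of the linear environment model (F, b) and
        bookkeeping of feature appearance frequencies."""
        global theta, F, b, counts
        delta = reward + GAMMA * (theta @ phi_next) - theta @ phi_sa  # TD error
        theta += ALPHA * delta * phi_sa                        # gradient step for linear Q
        F += ALPHA * np.outer(phi_next - F @ phi_sa, phi_sa)   # model: next features
        b += ALPHA * (reward - b @ phi_sa) * phi_sa            # model: reward
        counts += (phi_sa != 0)                                # record appeared features

    def plan():
        """Planning phase: replay unit feature vectors through the learned
        model, adding an extra reward for rarely visited features."""
        global theta
        for _ in range(N_PLANNING):
            i = rng.integers(N_FEATURES)               # pick a feature to simulate from
            phi_sa = np.zeros(N_FEATURES)
            phi_sa[i] = 1.0
            bonus = KAPPA / np.sqrt(counts[i] + 1.0)   # heuristic extra reward
            r_model = b @ phi_sa + bonus               # predicted reward plus bonus
            phi_next = F @ phi_sa                      # predicted next features
            delta = r_model + GAMMA * (theta @ phi_next) - theta @ phi_sa
            theta += ALPHA * delta * phi_sa

Here the bonus makes rarely seen features look slightly more rewarding during planning, which is one plausible reading of the abstract's extra rewards set from the recorded feature frequencies; the real algorithm may compute both the model and the bonus differently.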

Key words: reinforcement learning (RL), model learning, planning, function approximation, machine learning

CLC Number: