ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2014, Vol. 51 ›› Issue (10): 2160-2170. doi: 10.7544/issn1000-1239.2014.20130823

• Artificial Intelligence •

Online Counterfactual Regret Minimization in Repeated Imperfect Information Extensive Games

Hu Yujing1, Gao Yang1, An Bo2   

  1. 1(State Key Laboratory for Novel Software Technology (Nanjing University), Nanjing 210023); 2(Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190) (huyujing.yujing.hu@gmail.com)
  • Published: 2014-10-01
  • Supported by: 
    National Natural Science Foundation of China (61033010, 61272065, 61472453); Natural Science Foundation of Guangdong Province (S2011020001182); Science and Technology Program of Guangdong Province (2009B090300450, 2010A040303004, 2011B040200007)



Abstract: In this paper, we consider the problem of exploiting suboptimal opponents in imperfect information extensive games. Most previous work uses opponent modeling and computes a best response to exploit the opponent. However, a potential drawback of such an approach is that the computed best response may not be a true one, since the modeled strategy may differ from what the opponent actually plays. We address this problem from the perspective of online regret minimization, which avoids opponent modeling. We extend a state-of-the-art equilibrium-computing algorithm, counterfactual regret minimization (CFR), to online play. The core problem is how to compute counterfactual values in online scenarios. We propose to learn approximations of these values from the results produced by the game and introduce two estimators: a static estimator, which learns the values directly from the distribution of the results with equal weight on each sample, and a dynamic estimator, which assigns larger weights to newly sampled results than to older ones in order to adapt quickly to changes in the opponent's strategy. Based on the two estimators, we propose two algorithms for online regret minimization. We also give the conditions under which the values produced by our estimators equal the true counterfactual values, clarifying the relationship between CFR and our algorithms. Experimental results in one-card poker show that our algorithms not only achieve the best exploitation of several weak opponents, but also outperform four online learning algorithms (DBBR, MCCFR-os, Q-learning, and Sarsa) by achieving the highest win rate in matches with a few hands.
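The two estimators described above can be sketched as incremental update rules: a static estimator is a plain sample average (every observed result weighted equally), while a dynamic estimator uses a constant step size so that newer results dominate and the estimate can track a drifting opponent. The sketch below is an illustrative assumption, not the authors' implementation; the class names and the step-size parameter `alpha` are invented for exposition. The `regret_matching` function shows CFR's standard rule for turning cumulative counterfactual regrets into a strategy, which is the component the estimated values would feed.

```python
class StaticEstimator:
    """Sample average: each observed game result gets equal weight."""

    def __init__(self):
        self.value = 0.0
        self.n = 0

    def update(self, sampled_value: float) -> float:
        self.n += 1
        # Incremental mean: v_n = v_{n-1} + (x_n - v_{n-1}) / n
        self.value += (sampled_value - self.value) / self.n
        return self.value


class DynamicEstimator:
    """Constant step size: recent results receive geometrically larger
    weight, so the estimate adapts to a changing opponent strategy."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha  # illustrative recency weight in (0, 1]
        self.value = 0.0

    def update(self, sampled_value: float) -> float:
        self.value += self.alpha * (sampled_value - self.value)
        return self.value


def regret_matching(cumulative_regret):
    """CFR's rule: play actions in proportion to positive cumulative regret;
    fall back to uniform when no action has positive regret."""
    positive = [max(r, 0.0) for r in cumulative_regret]
    total = sum(positive)
    if total > 0:
        return [r / total for r in positive]
    return [1.0 / len(positive)] * len(positive)
```

The design contrast is the classic sample-average versus constant-step-size trade-off: the static estimator converges to the long-run mean against a fixed opponent, while the dynamic estimator sacrifices that convergence for faster reaction when the opponent's strategy shifts.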

Key words: extensive games, imperfect information, regret minimization, counterfactual regret minimization, static estimator, dynamic estimator
