    Hu Yujing, Gao Yang, An Bo. Online Counterfactual Regret Minimization in Repeated Imperfect Information Extensive Games[J]. Journal of Computer Research and Development, 2014, 51(10): 2160-2170. DOI: 10.7544/issn1000-1239.2014.20130823


    Online Counterfactual Regret Minimization in Repeated Imperfect Information Extensive Games

    • Abstract (translated from the Chinese): This paper studies the exploitation of suboptimal opponents in imperfect information extensive games. To address the shortcomings of opponent modeling, a commonly used approach in this field, we propose exploiting suboptimal opponents from the perspective of regret minimization, and extend an offline equilibrium-computing method, counterfactual regret minimization (CFR), to the online setting in order to exploit the weaknesses of suboptimal opponents. We propose estimating the counterfactual value of each information set from game outcomes and give two estimators: a static estimator and a dynamic estimator. The static estimator estimates the values directly from the distribution of game outcomes, giving every outcome equal weight, whereas the dynamic estimator assigns higher weights to newly produced outcomes so as to adapt quickly to changes in the opponent's strategy. Based on the two estimators, we propose algorithms for counterfactual regret minimization in online play and compare them with four online learning algorithms (DBBR, MCCFR-os, Q-learning, Sarsa) in experiments based on one-card poker. The experimental results show that the proposed algorithms not only exploit weak opponents most effectively, but also achieve the highest win rate in matches against the four baseline algorithms.

       

      Abstract: In this paper, we consider the problem of exploiting suboptimal opponents in imperfect information extensive games. Most previous works use opponent modeling and find a best response to exploit the opponent. However, a potential drawback of such an approach is that the computed best response may not be a true one, since the modeled strategy may differ from what the opponent actually plays. We try to solve this problem from the perspective of online regret minimization, which avoids opponent modeling. We make extensions to a state-of-the-art equilibrium-computing algorithm called counterfactual regret minimization (CFR). The core problem is how to compute the counterfactual values in online scenarios. We propose to learn approximations of these values from the results produced by the game and introduce two different estimators: a static estimator, which learns the values directly from the distribution of results, and a dynamic estimator, which assigns larger weights to newly sampled results than to older ones so as to better adapt to dynamic opponents. Two algorithms for online regret minimization are proposed based on the two estimators. We also give the conditions under which the values produced by our estimators equal the true counterfactual values, showing the relationship between CFR and our algorithms. Experimental results in one-card poker show that our algorithms not only perform the best when exploiting some weak opponents, but also outperform several state-of-the-art algorithms by achieving the highest win rate in matches consisting of only a small number of hands.
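
      The sketch below is a minimal, hypothetical illustration of the two estimation schemes described in the abstract, applied to a single information set together with a standard regret-matching update; it is not the paper's implementation. The action names, payoffs, and the recency weight alpha are assumptions made only for the example: the static estimator weights every sampled outcome equally (a running mean), while the dynamic estimator weights newer outcomes more heavily (an exponential recency-weighted mean).

      # Illustrative sketch only (not code from the paper): one information set
      # updated by regret matching, with two ways of estimating its per-action
      # counterfactual values from sampled game outcomes. Action names, payoffs,
      # and the recency weight `alpha` are made-up assumptions for illustration.
      import random

      ACTIONS = ["fold", "call", "raise"]

      class InfoSetLearner:
          def __init__(self, alpha=0.2):
              self.regret = {a: 0.0 for a in ACTIONS}     # cumulative regret per action
              self.value_est = {a: 0.0 for a in ACTIONS}  # estimated counterfactual value per action
              self.count = {a: 0 for a in ACTIONS}        # samples seen per action (static estimator)
              self.alpha = alpha                          # recency weight (dynamic estimator)

          def static_update(self, action, sampled_value):
              # Static estimation: every sampled outcome gets equal weight,
              # i.e. the estimate is a running arithmetic mean of observed values.
              self.count[action] += 1
              self.value_est[action] += (sampled_value - self.value_est[action]) / self.count[action]

          def dynamic_update(self, action, sampled_value):
              # Dynamic estimation: newer samples get larger weight (exponential
              # recency weighting), so the estimate can track a changing opponent.
              self.value_est[action] += self.alpha * (sampled_value - self.value_est[action])

          def strategy(self):
              # Regret matching: play each action in proportion to its positive cumulative regret.
              positive = {a: max(r, 0.0) for a, r in self.regret.items()}
              total = sum(positive.values())
              if total == 0.0:
                  return {a: 1.0 / len(ACTIONS) for a in ACTIONS}
              return {a: positive[a] / total for a in ACTIONS}

          def update_regret(self):
              # Accumulate each action's regret against the current strategy's
              # expected value, both computed from the estimated values.
              sigma = self.strategy()
              expected = sum(sigma[a] * self.value_est[a] for a in ACTIONS)
              for a in ACTIONS:
                  self.regret[a] += self.value_est[a] - expected

      # Usage with made-up noisy payoffs: the dynamic estimator is used here;
      # swapping in static_update gives the equal-weight variant.
      learner = InfoSetLearner()
      for _ in range(1000):
          a = random.choice(ACTIONS)
          payoff = {"fold": -1.0, "call": 0.5, "raise": 0.2}[a] + random.gauss(0, 0.1)
          learner.dynamic_update(a, payoff)
          learner.update_regret()
      print(learner.strategy())

      In a full online CFR variant along the lines described above, such estimates would be maintained at every information set and substituted for the exactly computed counterfactual values in the regret updates.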

       

