ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2019, Vol. 56 ›› Issue (3): 643-654. doi: 10.7544/issn1000-1239.2019.20180019

• Artificial Intelligence •


An Adaptive Algorithm for the Multi-Armed Bandit Problem

Zhang Xiaofang1,2, Zhou Qian1, Liang Bin1, Xu Jin1   

  1(School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006); 2(State Key Laboratory for Novel Software Technology (Nanjing University), Nanjing 210023) (xfzhang@suda.edu.cn)
  • Online: 2019-03-01
  • Supported by: National Natural Science Foundation of China (61772263, 61772014, 61572375); Suzhou Science and Technology Development Plan Project (SYG201807)


Abstract: As an important and active field of machine learning, reinforcement learning has received extensive attention in recent years. The multi-armed bandit (MAB) problem is a classic formulation of the exploration-exploitation dilemma in reinforcement learning. The stochastic multi-armed bandit (SMAB) problem is the most classical MAB variant and forms the basis of many newer MAB problems. To address the insufficient use of feedback information and the poor generalization ability of existing MAB methods, this paper presents an adaptive SMAB algorithm that balances exploration and exploitation based on the chosen number of the arm with the minimal estimated value, CNAME for short. CNAME uses both the chosen counts and the value estimates of the actions, so that each action is selected according to an exploration probability that is updated adaptively. To control the decline rate of the exploration probability, a parameter w is introduced to adjust how strongly feedback influences the selection process. Furthermore, CNAME does not depend on contextual information and therefore generalizes better across settings. The upper bound of CNAME's regret is theoretically proved and analyzed. Experimental results in different scenarios show that CNAME efficiently achieves higher reward and lower regret than commonly used methods, and that its generalization ability is strong.
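The abstract states which signal CNAME uses (the pull count of the arm with the currently minimal estimated value) and the role of w (controlling how fast the exploration probability declines), but not the exact update rule. The minimal Python sketch below illustrates one plausible reading under stated assumptions: the decay form eps = w / (w + n_min), the Bernoulli reward model, the function name cname_bandit, and the initial round-robin pull of every arm are illustrative choices, not the paper's specification. Regret here is the standard SMAB notion R(T) = T·μ* − E[Σ_{t=1}^{T} r_t], where μ* is the largest arm mean.

import random

def cname_bandit(true_means, horizon, w=1.0, seed=0):
    """Illustrative CNAME-style run on a Bernoulli SMAB instance.

    Assumption: the exploration probability decays as w / (w + n_min),
    where n_min is the pull count of the arm whose current estimated
    value is minimal. This decay form is a guess for illustration,
    not the rule proved in the paper.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k        # pulls per arm
    estimates = [0.0] * k   # running mean reward per arm
    total_reward = 0.0

    for t in range(horizon):
        if t < k:
            arm = t  # pull each arm once so every estimate is initialized
        else:
            # pull count of the arm with the minimal current estimate
            n_min = counts[min(range(k), key=lambda a: estimates[a])]
            eps = w / (w + n_min)  # larger w => slower decay of exploration
            if rng.random() < eps:
                arm = rng.randrange(k)  # explore uniformly at random
            else:
                arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total_reward += reward
    return total_reward, estimates

# Example: 3-armed Bernoulli bandit, 10,000 rounds
print(cname_bandit([0.2, 0.5, 0.8], horizon=10000, w=5.0))

With a larger w the exploration probability decays more slowly, which mirrors the abstract's description of w as adjusting how strongly feedback influences selection.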

Key words: reinforcement learning, multi-armed bandit, exploration and exploitation, adaptation, contextual

CLC Number: