Dissertation Details
Parametrized Stochastic Multi-armed Bandits with Binary Rewards
Jiang, Chong; Srikant, Rayadurgam
Keywords: multi-armed bandits; machine learning
Others  :  https://www.ideals.illinois.edu/bitstream/handle/2142/18352/Jiang_Chong.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】

In this thesis, we consider the problem of multi-armed bandits with a large number of correlated arms. We assume that the arms have Bernoulli distributed rewards, independent across arms and across time, where the probabilities of success are parametrized by known attribute vectors for each arm, together with an unknown preference vector. For this model, we seek an algorithm whose total regret is sub-linear in time and independent of the number of arms. We present such an algorithm, which we call the Three-phase Algorithm, and analyze its performance. We show an upper bound on the total regret which holds uniformly in time. The asymptotics of this bound show that for any $f \in \omega(\log(T))$, the total regret can be made $O(f(T))$, independent of the number of arms.
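The abstract specifies the reward model only abstractly: each arm's Bernoulli success probability is determined by its known attribute vector and a shared unknown preference vector. The sketch below simulates such a model, assuming for illustration a logistic link between attributes and success probability; the link function, dimensions, and the uniform-play baseline are assumptions for this sketch and are not taken from the thesis, which analyzes its own Three-phase Algorithm.

```python
# Sketch of a parametrized Bernoulli bandit environment (illustrative assumptions:
# logistic link, K = 50 arms, d = 5 attributes; not the thesis's exact model).
import numpy as np

rng = np.random.default_rng(0)

K, d = 50, 5                      # number of arms, attribute dimension
A = rng.normal(size=(K, d))       # known attribute vector for each arm
theta = rng.normal(size=d)        # unknown preference vector (hidden from the learner)

def success_prob(a, theta):
    """Hypothetical logistic parametrization: p = sigma(a . theta)."""
    return 1.0 / (1.0 + np.exp(-a @ theta))

def pull(arm):
    """Bernoulli reward, independent across arms and across time."""
    return float(rng.random() < success_prob(A[arm], theta))

# Expected regret of a naive uniform-exploration baseline over horizon T,
# for contrast with the sub-linear, arm-count-independent bound in the abstract.
T = 10_000
p = success_prob(A, theta)
best = p.max()
regret = sum(best - p[rng.integers(K)] for _ in range(T))
print(f"expected regret of uniform play over T={T}: {regret:.1f}")
```

A uniform-play baseline of this kind incurs regret linear in T; the abstract's claim is that the Three-phase Algorithm achieves $O(f(T))$ regret for any $f \in \omega(\log(T))$, with no dependence on the number of arms K.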

【 Preview 】
Attachment List
File: Parametrized Stochastic Multi-armed Bandits with Binary Rewards (317 KB, PDF)
Document Metrics
Downloads: 1    Views: 2