Seminar: Odysseas Kanavetas
STATISTICS SEMINAR SERIES, MARCH 2020
Οδυσσέας Καναβέτας, Leiden University, Mathematical Institute
Optimal data driven policies under constrained multi-armed bandit observations
ROOM T105, TROIAS 2, NEW AUEB BUILDING
After a brief review of the multi-armed bandit (MAB) problem and its applications in online machine learning, we present our work on the model with side constraints. The constraints represent circumstances in which bandit activations are restricted by the availability of certain resources, which are replenished at a constant rate.
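As a rough illustration of this setting, the toy simulation below models a bandit whose arms each consume a resource when pulled, with the budget replenished at a constant rate per round. All names and the specific dynamics (unit-variance Normal rewards, per-arm costs) are illustrative assumptions, not details from the talk.

```python
import random

class ConstrainedBandit:
    """Toy constrained-MAB environment: each arm pull consumes some of a
    resource budget that is replenished at a constant rate every round.
    The structure is a sketch for illustration, not the paper's model."""

    def __init__(self, means, costs, replenish_rate, seed=0):
        self.means = means            # true arm means (unknown to the learner)
        self.costs = costs            # resource consumed per pull of each arm
        self.rate = replenish_rate    # resource added at each round
        self.budget = 0.0
        self.rng = random.Random(seed)

    def feasible_arms(self):
        # Only arms whose cost fits the current budget may be activated.
        return [i for i, c in enumerate(self.costs) if c <= self.budget]

    def step(self, arm):
        self.budget += self.rate      # constant-rate replenishment
        if arm is not None and self.costs[arm] <= self.budget:
            self.budget -= self.costs[arm]
            return self.rng.gauss(self.means[arm], 1.0)  # Normal reward
        return None                   # idle round: no feasible activation
```

A feasible policy in this sketch is one that, on every sample path, only ever activates arms returned by `feasible_arms()`.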
We consider the class of feasible uniformly fast (f-UF) convergent policies, which satisfy the constraints sample-pathwise. We first establish a necessary asymptotic lower bound on the rate of increase of the regret (i.e., the loss due to the need to estimate unknown parameters) of f-UF policies. Then, under pertinent conditions, we establish the existence of asymptotically optimal policies by constructing a class of f-UF policies that achieve this lower bound.
We provide the explicit form of such policies for cases in which the unknown distributions are a) Normal with unknown means and known variances, b) Normal with unknown means and unknown variances, and c) arbitrary discrete distributions with finite support.
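For case a), a flavor of what an index policy for Normal arms with known variance looks like is given by the textbook UCB index below. This is a standard unconstrained baseline, not the f-UF construction of the talk (in particular, it ignores the resource constraints); the function name and the confidence-radius constant are assumptions for illustration.

```python
import math

def ucb_gaussian(sample_means, counts, t, sigma=1.0):
    """Standard UCB index for Normal rewards with known variance sigma^2:
    pick the arm maximizing mean + sigma * sqrt(2 log t / n). A textbook
    baseline, not the constrained f-UF policy from the talk."""
    indices = []
    for mu_hat, n in zip(sample_means, counts):
        if n == 0:
            indices.append(float("inf"))  # force initial exploration
        else:
            indices.append(mu_hat + sigma * math.sqrt(2.0 * math.log(t) / n))
    return max(range(len(indices)), key=indices.__getitem__)
```

The exploration radius shrinks like sqrt(log t / n), which is what drives the logarithmic regret rates that the lower bound in the talk is stated against.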