William Greenall Seminar
AUEB STATISTICS SEMINAR SERIES MARCH 2021
William Greenall (PhD student, UCL. Supervisor: Petros Dellaportas)
Practical Distributionally Robust Markov Decision Processes using Relative Entropy
Distributionally Robust Markov Decision Processes offer a toolset for improving the performance of sequential optimisation algorithms in the face of poor or particularly uncertain estimates of a transition model. The literature has focused on Wasserstein distances as a tool for regulating the extent of robustness, but these are not simple to use because they generally lack closed forms. The Kullback-Leibler divergence, on the other hand, has been shunned because its use has heretofore implied limited flexibility. I present a method that renders the KL-divergence a useful and practical tool for the construction of ambiguity sets, and build both discrete-state and continuous-state decision processes using this formulation.
(Presentation slides can be found here)