-
Pure Exploration
We look at structured active sequential hypothesis testing problems. We are interested in quantifying how many samples are necessary, and in finding near-optimal sampling strategies. One driving example is (stochastic) game tree search.
-
Learning Faster from Easy Data
In online learning, algorithms are typically analysed in the worst case. However, not all data are created equal. We aim to quantify the difficulty of the data, and design efficient algorithms that adapt to it automatically.
-
Robust Statistics
We investigate ways to make reliable inferences with the help of a statistical model, but without assuming that the data are generated from this model.
-
Online Learning Beyond Experts
We investigate tradeoffs between predictive performance and computational efficiency by considering online learning of combinatorial and matrix concept classes.
-
Minimax Square Loss Prediction
The squared Euclidean distance is one of the most widely used loss functions in machine learning. We study prediction games with square loss using a pure minimax approach. The goal is to obtain efficient algorithms for natural tasks.
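As a toy instance of the sequential square-loss prediction protocol (our illustration, not the minimax strategy itself): a "follow the leader" forecaster predicts the minimizer of cumulative square loss on past outcomes, which for square loss is simply their running mean.

```python
# Sketch of sequential prediction with square loss (illustrative only).
# For square loss, the cumulative-loss minimizer over past outcomes is
# their mean, so follow-the-leader predicts the running average.
def follow_the_leader(outcomes):
    """Yield one prediction before each outcome; predict 0.0 initially."""
    preds, total = [], 0.0
    for t, y in enumerate(outcomes):
        preds.append(total / t if t > 0 else 0.0)
        total += y
    return preds

def square_loss(preds, outcomes):
    """Cumulative squared error of the predictions."""
    return sum((p - y) ** 2 for p, y in zip(preds, outcomes))
```

For example, on the outcome sequence `[1.0, 0.0, 1.0]` the forecaster predicts `[0.0, 1.0, 0.5]`, incurring cumulative loss 2.25.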
-
Online Finance
We consider worst-case approaches to sequential investment, obtaining interesting performance guarantees that can be achieved by efficient algorithms.
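One classical object in worst-case sequential investment (a minimal sketch for illustration, not the strategies studied here) is the constantly rebalanced portfolio, which restores fixed wealth proportions every round.

```python
# Wealth of a constantly rebalanced portfolio (CRP) -- illustrative sketch.
# b: fixed portfolio weights summing to 1; price_relatives: one vector per
# round, entry i giving (closing price / opening price) of asset i.
def crp_wealth(b, price_relatives):
    wealth = 1.0
    for x in price_relatives:
        # Each round, wealth is multiplied by the weighted price relative.
        wealth *= sum(bi * xi for bi, xi in zip(b, x))
    return wealth
```

For instance, with two assets that alternately double and halve, the uniform CRP `b = [0.5, 0.5]` grows wealth by a factor 1.25 per round even though each individual asset goes nowhere.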
-
Switching
Patterns in data often change with time. We take a worst-case approach to designing adaptive algorithms to deal with such non-stationarity.
-
Non-uniform Regret Analysis
We study ways to incorporate prior expectations about the environment in the form of data-dependent luckiness guarantees.
-
Algorithmic Information Theory
We investigate interesting variations of the notion of Kolmogorov complexity.
-
Open Problems
A collection of open problems.