News Blog of Dennis Komm


2022-02-02

Paper “Randomized Online Computation with High Probability Guarantees”

Algorithmica 86(6)
Randomized algorithms for classical (offline) optimization problems are usually assessed either by their expected performance or by the performance they achieve with a certain probability. The two measures are closely related via Markov's inequality: if an algorithm performs well in expectation, one can simply run it repeatedly on the same instance and take the best result to perform well with high probability.
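
To spell out that standard argument (a textbook calculation, not specific to our paper): suppose a randomized algorithm ALG for a minimization problem guarantees $\mathbb{E}[\mathrm{ALG}(I)] \le c \cdot \mathrm{OPT}(I)$ on every instance $I$. Markov's inequality then gives
\[
  \Pr\bigl[\mathrm{ALG}(I) \ge (1+\varepsilon)\, c \cdot \mathrm{OPT}(I)\bigr] \le \frac{1}{1+\varepsilon},
\]
so running $k$ independent copies on the same instance and keeping the best solution exceeds the bound $(1+\varepsilon)\, c \cdot \mathrm{OPT}(I)$ only with probability at most $(1+\varepsilon)^{-k}$, which vanishes exponentially in $k$.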

This does not work for online algorithms, because they get only one shot at computing a solution. However, it is indeed possible to use an interesting amplification technique to transform a randomized online algorithm that performs well in expectation into one that performs well with high probability. Basically, you simulate the former and cleverly “restart” it every now and then, as sketched below.
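
The construction in the paper is of course more subtle, but the bare restart idea fits into a few lines of Python. Everything here (the function names, the cost model, the fixed budget) is purely illustrative and not taken from the paper:

import random

def restart_wrapper(online_step, requests, budget):
    # Toy illustration (not the paper's construction): serve the requests one by
    # one with a randomized procedure 'online_step'; whenever the cost of the
    # current run exceeds 'budget', discard its state and restart with fresh
    # randomness.  A real analysis would have to choose 'budget' carefully.
    state, cost_this_run, total_cost = None, 0.0, 0.0
    for request in requests:
        state, cost = online_step(state, request)  # serve the next request online
        cost_this_run += cost
        total_cost += cost
        if cost_this_run > budget:                 # "restart": forget the state,
            state, cost_this_run = None, 0.0       # but keep counting total cost
    return total_cost

# Hypothetical randomized one-step procedure: the cost of a request is its size,
# randomly doubled half of the time.
def noisy_step(state, request):
    return state, request * random.choice([1.0, 2.0])

random.seed(0)
print(restart_wrapper(noisy_step, [1.0] * 100, budget=20.0))

The real difficulty, naturally, lies in deciding when to restart so that the expected-performance guarantee of the original algorithm carries over with high probability without the restarts ruining the competitive ratio.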

Our paper “Randomized Online Computation with High Probability Guarantees,” in which we do just this, has just been published in Algorithmica (open access). It is actually one of my favorite results.

A huge thanks to my coauthors Rastislav and Richard Královič and Tobias Mömke.