Department of Computer Science | Institute of Theoretical Computer Science

Random sampling is an important tool in optimization subject to
finitely or infinitely many constraints. Here we are interested in
obtaining solutions of low cost that violate only a few
constraints. Under convexity or similar favorable conditions, and
assuming fixed dimension, one can indeed derive combinatorial bounds
on the expected number (or probability mass) of constraints violated
by the optimal solution subject to a (small) random sample of
constraints. The cost of the sample solution, however, cannot be
bounded combinatorially.
Suppose that after picking the sample, we remove k constraints from it
in such a way that the best improvement of the objective function is
achieved. Can we still guarantee (good) bounds on the number or mass
of violated constraints? This question was asked and partially
answered in the framework of chance-constrained optimization.
Here we study this question in the completely combinatorial setting of
LP-type problems, which allows a unified treatment of many different
optimization problems. Given a nondegenerate LP-type problem of
combinatorial dimension \delta, we consider sampling with subsequent
removal of the k constraints that lead to the best improvement of the
objective function. We show that this induces an LP-type problem of
combinatorial dimension O(\delta^{k+1}), and that this bound is tight in the
worst case. It follows that through this kind of removal, the expected
number of violated constraints blows up by a factor of at most O(\delta^k);
for relevant sample sizes, however, a matching lower bound for the
blowup factor is not implied. At the same time, our results show that
further improvements require new tools that go beyond combinatorial
dimension.
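The setting above can be sketched in the simplest possible case, an LP-type problem of combinatorial dimension \delta = 1: minimize x subject to constraints x >= a_i. Here the optimum on a sample is the largest sampled a_i, the k constraints whose removal improves the objective most are the k largest sampled values, and the constraints violated by the relaxed optimum can be counted directly. This is a hypothetical illustration of the problem setting, not code from the paper; the function name and parameters are invented for the sketch.

```python
import random

def violated_after_removal(n=10_000, r=100, k=2, seed=0):
    """Toy 1-dimensional LP-type problem: minimize x subject to
    x >= a_i for n constraints a_i drawn uniformly from [0, 1).

    Sample r constraints, remove the k constraints whose removal
    yields the best improvement of the objective (here simply the
    k largest sampled values), and count how many of all n
    constraints the relaxed sample optimum violates."""
    rng = random.Random(seed)
    constraints = [rng.random() for _ in range(n)]

    sample = sorted(rng.sample(constraints, r))
    # Optimum on the full sample is max(sample); after removing the
    # k best constraints to remove it drops to the (k+1)-th largest.
    opt = sample[-(k + 1)]

    # A constraint a_i is violated iff opt < a_i.
    return sum(a > opt for a in constraints)
```

Running this for growing k shows the count of violated constraints increasing, which is the blowup the abstract quantifies: in dimension \delta = 1 the expected number of violated constraints grows with k, and the combinatorial bounds discussed above control how fast this can happen for general \delta.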