We consider unconstrained randomized optimization of convex objective
functions. We analyze the Random Pursuit algorithm, which iteratively
computes an approximate solution to the optimization problem by
repeated optimization over a randomly chosen one-dimensional
subspace. This randomized method uses only zeroth-order information
about the objective function and does not require any problem-specific
parametrization. We prove convergence and give convergence rates for
smooth objectives, assuming that the one-dimensional optimization can
be solved exactly or approximately by an oracle. A convenient property
of Random Pursuit is its invariance under strictly monotone
transformations of the objective function; it therefore enjoys identical
convergence behavior on the larger class of functions obtained from
smooth convex functions by such transformations. To support the
theoretical results, we present an extensive numerical comparison of
Random Pursuit, two gradient-free algorithms recently proposed by
Nesterov, and a classical adaptive step-size random search scheme. We
also present an accelerated heuristic version of the Random Pursuit
algorithm that significantly improves upon standard Random Pursuit on all
numerical benchmark problems. A general comparison of the experimental
results reveals that (i) standard Random Pursuit is effective on
strongly convex functions with moderate condition number, and (ii) the
accelerated scheme is comparable to Nesterov's fast gradient method
and outperforms adaptive step-size strategies. The appendix contains
additional supporting online material.
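For illustration, the following is a minimal sketch of the basic Random Pursuit iteration as described above: in each step a direction is drawn uniformly at random from the unit sphere, and the objective is minimized along the corresponding line by a one-dimensional oracle. The sketch is written in Python with NumPy and uses SciPy's minimize_scalar as a stand-in for the line search oracle; all names, parameter choices, and the iteration budget are illustrative assumptions, not the paper's reference implementation.

    # Illustrative sketch of a basic Random Pursuit iteration (an assumption-laden
    # stand-in, not the paper's reference implementation). The line search oracle
    # is emulated with scipy.optimize.minimize_scalar, directions are drawn
    # uniformly from the unit sphere, and a fixed iteration budget is used.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def random_pursuit(f, x0, n_iters=1000, rng=None):
        """Minimize f by repeated 1-D minimization along random directions.

        Uses only zeroth-order information (function values of f).
        """
        rng = np.random.default_rng(rng)
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iters):
            # Sample a direction uniformly at random from the unit sphere.
            u = rng.standard_normal(x.shape)
            u /= np.linalg.norm(u)
            # One-dimensional oracle: minimize t -> f(x + t * u).
            t_star = minimize_scalar(lambda t: f(x + t * u)).x
            x = x + t_star * u
        return x

    if __name__ == "__main__":
        # Example: a simple convex quadratic in R^10 (illustrative test problem).
        A = np.diag(np.linspace(1.0, 10.0, 10))
        f = lambda x: 0.5 * x @ A @ x
        x_hat = random_pursuit(f, x0=np.ones(10), n_iters=500, rng=0)
        print("f(x_hat) =", f(x_hat))

Note that the sketch is invariant under strictly monotone transformations of f, since only comparisons of function values along each line enter the update.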