

How do we measure the complexity of a convex hull algorithm?

To answer this question, we assume the unit cost RAM model, in which the computational time is essentially the number of elementary arithmetic operations and the storage of any integer number takes unit space. See Section 2.14.

There are two approaches to evaluating the complexity of a given convex hull algorithm.

Let $ \alpha$ be an algorithm which computes a minimal inequality description $ P=\{ x : A x \le b \}$ of a full-dimensional convex polytope $ P=conv(S)$ for a given point set $ S$ in $ R^d$ with $ n=\vert S\vert$. Let $ m$ denote the number of inequalities in the output $ A x \le b$.

(One can interpret the discussion here in the dual setting: consider $ \alpha$ as an algorithm to compute all $ m$ vertices $ S'$ of a convex polytope $ P=\{x : A' x \le b' \}$ given by $ n$ inequalities.)
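
For concreteness, here is a small computational sketch (an illustration only, not one of the algorithms discussed in this section; it assumes NumPy and SciPy, whose ConvexHull class wraps the Qhull library) that computes an inequality description of $ conv(S)$ for random points and reports the sizes $ n$, $ d$ and $ m$:

    # A minimal sketch (assuming NumPy and SciPy; scipy.spatial.ConvexHull wraps Qhull).
    # It computes facet inequalities A x <= b of conv(S) for n random points in R^d
    # and reports the output size m, the number of facet inequalities.
    import numpy as np
    from scipy.spatial import ConvexHull

    d, n = 4, 100
    S = np.random.rand(n, d)          # n random points in R^d (full-dimensional almost surely)
    hull = ConvexHull(S)

    # Qhull returns each facet as a row [a_1, ..., a_d, c] with a x + c <= 0 for points
    # of the hull, i.e., A x <= b with b = -c.
    A = hull.equations[:, :-1]
    b = -hull.equations[:, -1]
    m = A.shape[0]
    print(f"n = {n}, d = {d}, m = {m} facet inequalities")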

First of all, most people agree that the efficiency of computing the convex hull should be measured at least in terms of the critical input parameters $ d$ and $ n$. Some people prefer to analyze the complexity with $ d$ fixed to a constant, but it is always better to evaluate it in terms of $ d$ as well, and to fix $ d$ only afterwards.

The first measure, often employed by computational geometers, is to bound the worst-case running time of an algorithm $ \alpha$ over all inputs of $ n$ points in $ R^d$. For example, if $ \alpha$ is of order $ O(d! \; n^d)$, then it terminates in time $ O(d! \; n^d)$ for ANY input of $ n$ points in dimension $ d$. Also, when $ d$ is fixed (constant), such an algorithm is said to have time complexity $ O(n^d)$, since $ d!$ is simply a constant. We may call this the worst-case-input measure. For fixed dimension, there is an optimal algorithm [Cha93] for the convex hull in terms of the worst-case-input measure, which runs in time $ O(n^{\lfloor d/2 \rfloor})$ for $ d\ge 4$. It cannot be improved, because the largest possible output is of the same order by the upper bound theorem (Theorem 2).
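
As a rough illustration of how large the worst-case output can be, the following sketch (an illustration only; it uses the standard facet-count formula for the cyclic polytope, cf. Theorem 2 and (1)) tabulates the maximum possible number of facets for a few values of $ d$ and $ n$:

    # A small sketch: the maximum number of facets of a d-polytope with n vertices,
    # attained by the cyclic polytope (upper bound theorem; standard formula assumed).
    from math import comb

    def max_facets(d, n):
        return comb(n - (d + 1) // 2, n - d) + comb(n - (d + 2) // 2, n - d)

    for d in (3, 4, 6, 8):
        for n in (10, 20, 40):
            print(f"d = {d}, n = {n}: at most {max_facets(d, n)} facets")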

The worst-case-input measure is quite popular, but it can be a little misleading. For example, suppose algorithms $ \alpha$ and $ \beta$ have time complexity $ O(n^d)$ and $ O(n^{2d})$, respectively. Then by this measure, the algorithm $ \alpha$ is superior to $ \beta$.

Here is a potentially serious problem with this worst-case-input measure. In the example above, it is still possible that $ \alpha$ takes worst-case time $ n^d$ for ALL inputs of $ n$ points in $ R^d$, while $ \beta$ takes time proportional to some polynomial function of $ n, d, m$. Note that the number $ m$ of inequalities varies wildly, from $ O(1)$ to $ O(n^{\lfloor d/2 \rfloor})$, even for fixed $ d$ (by the upper bound theorem, Theorem 2, and (1)). This diversity is simply too big to be ignored when $ d\ge 4$. Furthermore, the inputs leading to the worst-case output hardly ever occur in practice. In fact, for the random spherical polytope, the expected size of $ m$ is linear in $ n$; see Section 2.16. While the worst-case-input optimal algorithm [Cha93] is a remarkable theoretical achievement, we are still very far from knowing the best ways to compute the convex hull in general dimensions.
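
The following small experiment (a sketch assuming NumPy and SciPy, as before) illustrates this point: for points sampled uniformly on the unit sphere in $ R^4$, the observed number of facets grows roughly linearly in $ n$, far below the worst-case order $ n^{\lfloor d/2 \rfloor}$.

    # A sketch (assuming NumPy/SciPy): random points on the unit sphere in R^4.
    # The observed facet count m grows roughly linearly in n, far below n^2.
    import numpy as np
    from scipy.spatial import ConvexHull

    d = 4
    rng = np.random.default_rng(0)
    for n in (100, 200, 400, 800):
        X = rng.normal(size=(n, d))
        X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform points on the sphere
        m = ConvexHull(X).equations.shape[0]
        print(f"n = {n:4d}: m = {m:6d} facets   (worst-case order n^2 = {n * n})")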

In order to circumvent this pitfall, one can use a measure involving all the key variables $ d, n, m$. More generally, one can measure the time complexity in terms of both the size of the input and the size of the output. We say an algorithm $ \alpha$ is polynomial if it runs in time bounded by a polynomial in $ d, n, m$. This notion of polynomiality coincides with the usual polynomiality when the output size is polynomially bounded by the size of the input.
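
For example (a hypothetical comparison, not an algorithm from the literature), an algorithm running in time $ O(d\,n\,m)$ is polynomial in this sense; expressed in the input size alone, its running time is only $ O(d\,n^2)$ when $ m=O(n)$, but becomes $ O(d\,n^{1+\lfloor d/2 \rfloor})$ when $ m$ attains the upper bound $ \Theta(n^{\lfloor d/2 \rfloor})$.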

Under the nondegeneracy assumption (see 2.12), there is a polynomial algorithm for the convex hull problem. Among the earlier polynomial algorithms are pivot-based algorithms [CCH53,Dye83], which solve the problem in the dual form (the vertex enumeration problem), and a wrapping algorithm [CK70]. A more recent algorithm [AF92], based on the reverse search technique [AF96], is not only polynomial but also compact. Here, we say an algorithm is compact if its space complexity is polynomial in the input size only.
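
To convey the flavour of the wrapping idea in the simplest possible setting, here is a minimal planar sketch (an illustration only, assuming points in general position; the cited algorithms work in general dimension):

    # A minimal planar sketch of the gift-wrapping idea (illustration only; the cited
    # algorithms work in general dimension).  Starting from the lexicographically
    # smallest point, repeatedly "wrap" around the point set.
    def cross(o, a, b):
        # positive iff the turn o -> a -> b is counterclockwise
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def gift_wrap(points):
        # returns the hull vertices in counterclockwise order
        # (assumes at least two points, no three of them collinear)
        start = min(points)
        hull, p = [], start
        while True:
            hull.append(p)
            q = points[0] if points[0] != p else points[1]
            for r in points:
                # if r lies to the right of the ray p -> q, then q cannot be the
                # next hull vertex; wrap further to r
                if cross(p, q, r) < 0:
                    q = r
            p = q
            if p == start:
                return hull

    print(gift_wrap([(0, 0), (3, 1), (1, 1), (2, 3), (0, 2)]))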

In the general case, there is no known polynomial algorithm. The paper [ABS97] is an excellent article presenting how various algorithms fail to be polynomial, through ingenious constructions of ``nasty'' polytopes.


Komei Fukuda 2004-08-26