Clarkson's Algorithm for Polyhedral Representation: Two Experiments

This page reports two kinds of computational experiments comparing two algorithms for removing redundancies from representations of convex polyhedra in general dimension: the naive LP algorithm and the LP+Clarkson algorithm. We will see how Clarkson's idea spectacularly improves on the naive algorithm for highly redundant random inputs.

There are two standard representations of convex polyhedra: the V-representation and the H-representation.

For the V-representation, let Q be a given set of n points in R^d. To represent conv(Q), we only need its extreme points; removing the non-essential (redundant) points is therefore a fundamental problem in computational geometry and in mathematics in general.

A straightforward algorithm is to solve n LPs (linear programs), each with O(n) constraints in d variables. Clarkson found a much more efficient algorithm that solves the same problem via n LPs with only O(s) constraints in d variables, where s is the number of essential (extreme) points. The LPs to be solved are thus much smaller whenever s << n. We call s/n the essential ratio, and 100 x s/n the essential percentage.
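The naive test behind the LP algorithm can be sketched as follows: a point is redundant exactly when it is a convex combination of the other points, which is a single LP feasibility problem. This sketch uses scipy's LP solver purely for illustration (the experiments below use cddlib); the function names are made up here.

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(Q, i):
    """Test whether Q[i] lies in conv(Q with Q[i] removed).

    One LP feasibility problem: find lambda >= 0 with sum(lambda) = 1
    and sum_j lambda_j * Q[j] = Q[i] over the remaining points.
    """
    others = np.delete(Q, i, axis=0)                    # the n-1 other points
    # Equality constraints: the weights reproduce Q[i] and sum to one.
    A_eq = np.vstack([others.T, np.ones(len(others))])  # (d+1) x (n-1)
    b_eq = np.append(Q[i], 1.0)
    res = linprog(c=np.zeros(len(others)), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.status == 0                              # feasible => redundant

def extreme_points(Q):
    """Naive algorithm: n LPs, each with O(n) constraints in d variables."""
    return [i for i in range(len(Q)) if not is_redundant(Q, i)]
```

Each of the n calls involves all n-1 other points as columns, which is exactly why the naive algorithm scales poorly when most points are redundant.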

If you are not familiar with Clarkson's algorithm, the textbook on Polyhedral Computation (Chapter 7) gives a detailed account of the algorithm and its complexity.
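For readers who want the flavor of the algorithm without the textbook at hand, here is a simplified sketch in Python; the experiments themselves use cddlib's implementation, and all names here are illustrative. Each point is tested only against the set of already-confirmed extreme points; when the test point is separable from that set, a ray-shooting step (maximizing the separating direction over all of Q, with lexicographic tie-breaking) discovers a new genuine extreme point. Floating-point degeneracies are handled only crudely here.

```python
import numpy as np
from scipy.optimize import linprog

def separating_direction(q, S):
    """LP with only |S|+1 constraints: maximize z.q - z0 subject to
    z.s - z0 <= 0 for all s in S and z.q - z0 <= 1.  The optimum is 1
    when some z strictly separates q from conv(S), and 0 when q is in
    conv(S), i.e. q is redundant."""
    d = len(q)
    c = np.append(-q, 1.0)                           # minimize -(z.q - z0)
    A_ub = np.vstack([np.column_stack([S, -np.ones(len(S))]),
                      np.append(q, -1.0)])
    b_ub = np.append(np.zeros(len(S)), 1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=(None, None), method="highs")
    if res.status != 0 or -res.fun < 0.5:            # optimum 0: redundant
        return None
    return res.x[:d]                                 # separating direction z

def clarkson_extreme_points(Q):
    """Clarkson's algorithm: every LP has O(s) constraints, where s is
    the number of extreme points confirmed so far."""
    n = len(Q)
    # The lexicographically smallest point is always extreme: start there.
    ess = {min(range(n), key=lambda j: tuple(Q[j]))}
    for i in range(n):
        if i in ess:
            continue
        while True:
            z = separating_direction(Q[i], Q[sorted(ess)])
            if z is None:                            # redundant: discard Q[i]
                break
            # Ray shooting: the maximizer of z.x over ALL of Q, with
            # lexicographic tie-breaking, is a genuine extreme point,
            # and never one we already have.
            k = max(range(n), key=lambda j: (Q[j] @ z, *Q[j]))
            ess.add(k)
            if k == i:                               # Q[i] itself is extreme
                break
    return sorted(ess)
```

The key point is visible in `separating_direction`: the constraint matrix grows only with the confirmed extreme points, never with n, which is where the O(s)-sized LPs come from.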

What would be the practical impact of Clarkson's algorithm? To answer this question, we have performed a preliminary experiment.

Here are important remarks on the experiment.

  1. We use two types of random data with a large amount of redundancy. The first type, UCUBE, is the class of sets of n (10k, 20k, 40k) uniformly random points in the hypercube of dimension d = 4, 5, 6. The second type, DUCUBE (double uniformly-random cube), is the class of unions of two UCUBEs where one cube is twice as wide as the other. This second class ensures that the redundancy is much higher than for UCUBE with the same d and n.
  2. We use integer points only to ensure that we can do exact (rational) computation on the data. We select random integers between -100 and 100.
  3. We use both floating-point and exact-rational arithmetic for the comparison.
  4. We use my implementations which are included in cddlib-0.94m. Note that earlier cddlib distributions contain (serious) bugs found by Mathieu Dutour Sikirić.
  5. We report only the CPU time in seconds. Note that Clarkson's algorithm does not require extra storage.
  6. All computations were performed on a MacBook Pro (2 GHz Quad-Core Intel Core i5). Anyone interested in performing the same experiments is welcome to do so. All input data files can be found in UcubeData. If you wish to get the whole directory as a compressed archive, just move to its parent directory, where the UcubeData.tar.gz file resides. See Instructions for doing the experiments yourself.
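For concreteness, the two input classes described in remarks 1 and 2 can be generated along the following lines. This is a reconstruction for illustration, not the script used to produce the distributed data files, and the function names are made up.

```python
import numpy as np

def ucube(n, d, width=100, seed=0):
    """UCUBE: n uniformly random integer points in [-width, width]^d."""
    rng = np.random.default_rng(seed)
    return rng.integers(-width, width + 1, size=(n, d))

def ducube(n, d, width=100, seed=0):
    """DUCUBE: union of two UCUBEs, one twice as wide as the other.
    Almost every point of the smaller cube falls inside the hull of the
    larger one, which is why the essential percentage drops sharply."""
    rng = np.random.default_rng(seed)
    small = rng.integers(-width, width + 1, size=(n // 2, d))
    large = rng.integers(-2 * width, 2 * width + 1, size=(n - n // 2, d))
    return np.vstack([small, large])
```

Integer coordinates keep the data exactly representable, so the same files can be fed to both the floating-point and the exact-rational runs.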

                              Float                          Exact (Rational)
Instance       (ess%)     LP      LP+Clarkson  Speedup   LP      LP+Clarkson  Speedup
ucube10k_4d    (4%)       22s     2s           11        224s    25s          9
ucube20k_4d    (2%)       83s     3s           28        842s    50s          17
ucube40k_4d    (1%)       331s    6s           55        3124s   118s         26
ucube10k_5d    (9%)       32s     4s           8         295s    70s          4
ucube20k_5d    (6%)       118s    11s          11        1068s   169s         6
ucube40k_5d    (3%)       465s    24s          19        3902s   403s         10
ucube10k_6d    (18%)      43s     11s          4         399s    151s         3
ucube20k_6d    (13%)      175s    29s          6         1463s   401s         4
ucube40k_6d    (9%)       699s    84s          8         5854s   1060s        6
ducube10k_4d   (0.6%)     17s     0s           *         227s    5s           45
ducube20k_4d   (0.4%)     67s     1s           67        981s    15s          65
ducube40k_4d   (0.3%)     327s    3s           109       3794s   46s          82
ducube10k_5d   (0.8%)     19s     0s           *         275s    9s           31
ducube20k_5d   (0.6%)     94s     2s           47        1071s   25s          43
ducube40k_5d   (0.4%)     409s    4s           102       4553s   65s          70
ducube10k_6d   (1.3%)     23s     1s           23        312s    12s          26
ducube20k_6d   (0.8%)     103s    2s           52        1302s   34s          38
ducube40k_6d   (0.7%)     449s    7s           64        4969s   106s         47

(* The LP+Clarkson run finished in under one second, so no meaningful speedup factor can be reported.)

For a higher-dimensional (27d and 35d) experiment, go to Ctype Experiment.

Written by Komei Fukuda (ETH Zurich, Switzerland)
Last updated: January 26, 2021