
cgal-discuss - [cgal-discuss] Delaunay 3D triangulation of points on the unit sphere



  • From: Joao Dinis <>
  • To: CGAL discussion list <>
  • Subject: [cgal-discuss] Delaunay 3D triangulation of points on the unit sphere
  • Date: Fri, 25 Nov 2011 11:06:02 +0000


Hi there,

I've been comparing the running times of computing the Delaunay
triangulation of points on the surface of a unit-radius sphere.
My interest is in retrieving the convex hull of those points, which is a
by-product of the triangulation.
A simple 3D triangulation would do, but the Delaunay triangulation has
the bonus of being a dynamic data structure.

Anyway, I was a bit surprised to observe that computing the Delaunay
triangulation took around 100x longer than a simple 3D triangulation.
(I'm using random data sets with sizes varying from 10^3 to 10^6.)

Of course, these data sets are degenerate, with most of the points being
co-spherical (maybe not all, because of arithmetic rounding).
So, the immediate solution that pops out is to add an extra point at the
centre (assuming it will be interior to the hull).
This trick does, in fact, reduce the running times, but only by a
factor of two; it is still 50x slower than a simple 3D triangulation.
My guess is that a single interior point is not enough to break the
data set degeneracy.

Further (extensive) tests led me to find out that it is possible to
greatly reduce run-times by adding not one but a lot of points near
the origin.
More exactly, I add an extra core data set of 6*sqrt(n) random
co-spherical points on a sphere with a radius of 0.1 (so, 1/10 of the
unit sphere), each one slightly shifted along each axis by a random
amount of up to 0.01 (so, 1/10 of the core radius).
The critical threshold appears to be the above 6*sqrt(n), n being the
initial number of points. Fewer seems not to be enough; more does not
improve things further.

Below is an example of run-times for a random data set of n = 256K.

Delaunay triang. -> 70.6 seconds
Delaunay triang. + point at the origin -> 45.4 seconds
Delaunay triang. + extra 6*sqrt(n) points -> 1.5 seconds
Simple 3D triangulation -> 1.0 seconds

The improvement from the extra core data set is huge.
But I was expecting to get the same improvement with a single point at
the origin.

Am I doing something wrong?
Is this behavior to be expected?

For the record, I'm doing the insertions all in one go (to benefit from
spatial sorting) and compiling in release mode (gcc -O3 -DNDEBUG), on
an Intel Xeon at 3 GHz running 64-bit Linux.


Thanks in advance.
Best regards,

Joao Dinis



Archive powered by MHonArc 2.6.16.