Benchmarks

The following tables compare Ripserer's performance with Ripser and Cubical Ripser. The benchmarking code and more information about the datasets are available here.

All benchmarks were performed on a laptop with an Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz with 8GB of RAM.

We used BenchmarkTools.jl to perform the timing benchmarks and Valgrind's Massif tool to measure peak heap sizes (i.e. total memory footprint).

All benchmarks include I/O, so there might be some noise in the smaller benchmarks. For Ripserer, CSV.jl was used to read the files. Ripserer's memory footprint benchmarks include the Julia runtime, which tends to use around 100 MiB of memory.
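
As a rough illustration of how a single timing run fits together (the file name, CSV layout, and `dim_max` value below are placeholders, not the actual benchmark setup):

```julia
using BenchmarkTools
using CSV, Tables
using Ripserer

# Everything, including reading the file, happens inside the benchmarked
# function, matching the note above that the benchmarks include I/O.
function run_ripserer(file; dim_max=2)
    table = CSV.File(file; header=false)
    points = [Tuple(Float32.(row)) for row in eachrow(Tables.matrix(table))]
    return ripserer(points; dim_max=dim_max)
end

@benchmark run_ripserer("dataset.csv")
```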

The benchmarks were performed with Ripserer v0.14.3 and master versions of Ripser (commit hash 286d369) and Cubical Ripser (commit hashes 6edb9c5 for 2D and a063dac for 3D).

Vietoris-Rips Persistent Homology

In this experiment, we performed benchmarks with the datasets presented in the Ripser article, using only those that we were able to run with less than 8 GB of memory. ripserer_s denotes Ripserer run with the SparseRips filtration. All datasets were parsed as Float32 because that is what Ripser supports.
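
For reference, a minimal sketch of what the two Ripserer configurations correspond to, assuming the filtration constructors available at the time (`Rips` and `SparseRips`) and using the `o3_1024` parameters from the table below as an example:

```julia
using Ripserer

# `points` is the o3_1024 dataset as a vector of Float32 tuples.
# The `ripserer` column: the default dense Rips filtration.
result_dense = ripserer(Rips(points; threshold=1.8); dim_max=3)

# The `ripserer_s` column: the same computation with a SparseRips filtration.
result_sparse = ripserer(SparseRips(points; threshold=1.8); dim_max=3)
```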

The timing results are in the table below. The ratio column shows the ratio between the faster of the two Ripserer times and the Ripser time.

| dataset | size | dim_max | threshold | ripserer | ripserer_s | ripser | ratio |
|:--------|-----:|--------:|----------:|---------:|-----------:|-------:|------:|
| random16 | 50 | 2 | | 8.1 ms | 9.3 ms | 10.2 ms | 0.80 |
| o3_1024 | 1024 | 3 | 1.8 | 4.3 s | 3.1 s | 3.1 s | 1.01 |
| o3_4096 | 4096 | 3 | 1.4 | 143.2 s | 76.3 s | 75.6 s | 1.01 |
| fract-r | 512 | 2 | | 22.4 s | 25.4 s | 19.6 s | 1.14 |
| sphere_3 | 192 | 2 | | 1.6 s | 1.8 s | 1.5 s | 1.04 |
| dragon | 2000 | 1 | | 3.2 s | 3.8 s | 2.8 s | 1.12 |

The next table shows peak heap sizes as reported by Valgrind.

| dataset | size | dim_max | threshold | ripserer | ripserer_s | ripser |
|:--------|-----:|--------:|----------:|---------:|-----------:|-------:|
| random16 | 50 | 2 | | 111.1 MiB | 111.1 MiB | 1.1 MiB |
| o3_1024 | 1024 | 3 | 1.8 | 374.1 MiB | 418.2 MiB | 151.0 MiB |
| o3_4096 | 4096 | 3 | 1.4 | 4.7 GiB | 4.9 GiB | 4.1 GiB |
| fract-r | 512 | 2 | | 2.2 GiB | 2.2 GiB | 2.0 GiB |
| sphere_3 | 192 | 2 | | 287.0 MiB | 288.5 MiB | 209.5 MiB |
| dragon | 2000 | 1 | | 316.7 MiB | 350.4 MiB | 296.8 MiB |

Alpha-Rips Persistent Homology

Here, we used the 1-skeleton of the Delaunay triangulation of a point cloud as a sparse distance matrix. This setup is intended to measure performance in very sparse cases, as triangulations only have $\mathcal{O}(n)$ $d$-simplices. All datasets were parsed as Float32 because that is what Ripser supports.
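
The construction can be sketched as follows, assuming the Delaunay edges have already been computed with some triangulation package (the `edges` variable and the helper function below are illustrative) and relying on Ripserer's support for sparse distance matrices, where unstored entries are treated as missing edges:

```julia
using SparseArrays
using Ripserer

# `points` is a vector of point tuples; `edges` is a vector of `(i, j)` index
# pairs forming the 1-skeleton of a Delaunay triangulation (computed elsewhere).
function alpha_rips_matrix(points, edges)
    is, js, vs = Int[], Int[], Float32[]
    for (i, j) in edges
        d = Float32(sqrt(sum(abs2, points[i] .- points[j])))
        # Store both (i, j) and (j, i) to keep the matrix symmetric.
        push!(is, i, j)
        push!(js, j, i)
        push!(vs, d, d)
    end
    # `min` combines duplicates in case an edge is listed more than once.
    return sparse(is, js, vs, length(points), length(points), min)
end

dists = alpha_rips_matrix(points, edges)
result = ripserer(dists; dim_max=2)
```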

Timing results:

| dataset | size | dim_max | ripserer | ripser | ratio |
|:--------|-----:|--------:|---------:|-------:|------:|
| torus | 10000 | 2 | 937 ms | 1.2 s | 0.79 |
| 3_sphere | 3000 | 3 | 623 ms | 809 ms | 0.77 |
| 4_sphere | 2000 | 4 | 5.9 s | 6.3 s | 0.94 |
| 5_sphere | 1000 | 5 | 49.7 s | 47.6 s | 1.04 |
| dragon | 2000 | 2 | 58 ms | 76.3 ms | 0.76 |

Peak heap size:

| dataset | size | dim_max | ripserer | ripser |
|:--------|-----:|--------:|---------:|-------:|
| torus | 10000 | 2 | 138.4 MiB | 33.2 MiB |
| 3_sphere | 3000 | 3 | 130.0 MiB | 27.7 MiB |
| 4_sphere | 2000 | 4 | 387.2 MiB | 202.0 MiB |
| 5_sphere | 1000 | 5 | 2.4 GiB | 1.5 GiB |
| dragon | 2000 | 2 | 110.9 MiB | 33.2 MiB |

Cubical Persistent Homology

In this benchmark, we use some of the datasets presented in the Cubical Ripser article. We limited the 2D image size to 1999×999 because the current master (commit hash 6edb9c5) version of 2D Cubical Ripser throws an assertion error for anything larger. We were also unable to perform the 3D 256×256×256 image benchmarks because Ripserer ran out of memory. The eltype of all datasets is Float64, because that is what Cubical Ripser supports. When using Ripserer in practice, it's a good idea to keep the data in its native element type; this slightly reduces the memory footprint and improves performance.
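
As a point of reference, a minimal sketch of a cubical run in Ripserer (the random image below stands in for the benchmark datasets):

```julia
using Ripserer

# A small grayscale image; in the benchmarks the images were stored as Float64
# because that is what Cubical Ripser supports.
image = rand(Float64, 64, 64)

# Cubical persistent homology of the image. Keeping the data in a narrower
# native element type such as Float32 can slightly reduce the memory footprint.
result = ripserer(Cubical(image))
```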

Timing results:

| dataset | size | ripserer | cubical_ripser | ratio |
|:--------|-----:|---------:|---------------:|------:|
| lena | 1999×999 | 3.8 s | 2.0 s | 1.90 |
| bonsai | 128×128×128 | 33.1 s | 15.0 s | 2.21 |
| lena | 512×512 | 842.6 ms | 295.9 ms | 2.85 |
| head | 128×128×128 | 26.0 s | 12.8 s | 2.03 |
| bonsai | 64×64×64 | 3.0 s | 3.0 s | 1.01 |

Peak heap size:

| dataset | size | ripserer | cubical_ripser |
|:--------|-----:|---------:|---------------:|
| lena | 512×512 | 145.0 MiB | 49.3 MiB |
| lena | 1999×999 | 514.4 MiB | 186.7 MiB |
| bonsai | 64×64×64 | 280.6 MiB | 1.3 GiB |
| bonsai | 128×128×128 | 1.5 GiB | 1.9 GiB |
| head | 128×128×128 | 1.5 GiB | 1.9 GiB |