This is the Java version of Sparkey. It's not a binding, but a full reimplementation. See Sparkey for more documentation on how it works.
Continuous integration with Travis CI.
- Java 6 or higher
- Maven
Build with Maven:
mvn package
See changelog.
Sparkey is meant to be used as a library embedded in other software.
To import it with Maven, use this for Java 8 or higher:
<dependency>
  <groupId>com.spotify.sparkey</groupId>
  <artifactId>sparkey</artifactId>
  <version>3.2.4</version>
</dependency>
Use this for Java 6 or 7:
<dependency>
  <groupId>com.spotify.sparkey</groupId>
  <artifactId>sparkey</artifactId>
  <version>2.3.2</version>
</dependency>
To help get started, take a look at the API documentation or an example usage: SparkeyExample
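As a minimal usage sketch along the lines of SparkeyExample (file names are placeholders; check the API documentation for the exact signatures):

```java
import com.spotify.sparkey.Sparkey;
import com.spotify.sparkey.SparkeyReader;
import com.spotify.sparkey.SparkeyWriter;

import java.io.File;
import java.io.IOException;

public class SparkeySketch {
  public static void main(String[] args) throws IOException {
    // Placeholder file name: the index file (.spi) and log file (.spl) live side by side.
    File indexFile = new File("data.spi");

    // Write a few entries and build the hash index.
    SparkeyWriter writer = Sparkey.createNew(indexFile);
    writer.put("key1", "value1");
    writer.put("key2", "value2");
    writer.writeHash();
    writer.close();

    // Open a reader and look entries up again.
    SparkeyReader reader = Sparkey.open(indexFile);
    System.out.println(reader.getAsString("key1")); // prints "value1"
    reader.close();
  }
}
```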
Apache License, Version 2.0
This data is the direct output from running the benchmarks via IntelliJ using the JMH plugin, on a machine with an Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz.
| Benchmark            | (numElements) | (type) | Mode  | Cnt | Score       | Error         | Units |
|----------------------|---------------|--------|-------|-----|-------------|---------------|-------|
| LookupBenchmark.test | 1000          | NONE   | thrpt | 4   | 4768851.493 | ± 934818.190  | ops/s |
| LookupBenchmark.test | 10000         | NONE   | thrpt | 4   | 4404571.185 | ± 952648.631  | ops/s |
| LookupBenchmark.test | 100000        | NONE   | thrpt | 4   | 3892021.755 | ± 560657.509  | ops/s |
| LookupBenchmark.test | 1000000       | NONE   | thrpt | 4   | 2748648.598 | ± 1345168.410 | ops/s |
| LookupBenchmark.test | 10000000      | NONE   | thrpt | 4   | 1921539.397 | ± 64678.755   | ops/s |
| LookupBenchmark.test | 100000000     | NONE   | thrpt | 4   | 8576.763    | ± 10666.193   | ops/s |
| LookupBenchmark.test | 1000          | SNAPPY | thrpt | 4   | 776222.884  | ± 46536.257   | ops/s |
| LookupBenchmark.test | 10000         | SNAPPY | thrpt | 4   | 707242.387  | ± 460934.026  | ops/s |
| LookupBenchmark.test | 100000        | SNAPPY | thrpt | 4   | 651857.795  | ± 975531.050  | ops/s |
| LookupBenchmark.test | 1000000       | SNAPPY | thrpt | 4   | 791848.718  | ± 19363.131   | ops/s |
| LookupBenchmark.test | 10000000      | SNAPPY | thrpt | 4   | 700438.201  | ± 27910.579   | ops/s |
| LookupBenchmark.test | 100000000     | SNAPPY | thrpt | 4   | 681790.103  | ± 45388.918   | ops/s |
Here we see that performance degrades as more elements are added, and drops off sharply once the data set no longer fits in memory. This is caused by:
- More frequent CPU cache misses.
- More page faults.
If you can mlock the full dataset, the performance should be more predictable.
| Benchmark                  | (type) | Mode  | Cnt | Score        | Error         | Units |
|----------------------------|--------|-------|-----|--------------|---------------|-------|
| AppendBenchmark.testMedium | NONE   | thrpt | 4   | 1595668.598  | ± 850581.241  | ops/s |
| AppendBenchmark.testMedium | SNAPPY | thrpt | 4   | 919539.081   | ± 205638.008  | ops/s |
| AppendBenchmark.testSmall  | NONE   | thrpt | 4   | 9800098.075  | ± 707288.665  | ops/s |
| AppendBenchmark.testSmall  | SNAPPY | thrpt | 4   | 20227222.636 | ± 7567353.506 | ops/s |
This benchmark is mostly disk bound; the CPU is not fully utilized. So even though Snappy uses more CPU, it's still faster because there is less data to write to disk, especially when the keys and values are small. (The score is the number of appends, not the amount of data, so this comparison may be slightly misleading.)
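As a minimal sketch of how compression is typically selected when creating a writer (assuming the createNew overload that takes a CompressionType and a compression block size; the 512-byte block size is only an illustrative value):

```java
import com.spotify.sparkey.CompressionType;
import com.spotify.sparkey.Sparkey;
import com.spotify.sparkey.SparkeyWriter;

import java.io.File;
import java.io.IOException;

public class CompressedWriteSketch {
  public static void main(String[] args) throws IOException {
    // Create a writer that Snappy-compresses the log in blocks of 512 bytes
    // (illustrative block size; tune it for your data).
    SparkeyWriter writer = Sparkey.createNew(new File("data.spi"), CompressionType.SNAPPY, 512);
    writer.put("key", "value");
    writer.writeHash();
    writer.close();
  }
}
```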
| Benchmark               | (constructionMethod) | (numElements) | Mode | Cnt | Score  | Error   | Units |
|-------------------------|----------------------|---------------|------|-----|--------|---------|-------|
| WriteHashBenchmark.test | IN_MEMORY            | 1000          | ss   | 4   | 0.003  | ± 0.001 | s/op  |
| WriteHashBenchmark.test | IN_MEMORY            | 10000         | ss   | 4   | 0.009  | ± 0.016 | s/op  |
| WriteHashBenchmark.test | IN_MEMORY            | 100000        | ss   | 4   | 0.035  | ± 0.088 | s/op  |
| WriteHashBenchmark.test | IN_MEMORY            | 1000000       | ss   | 4   | 0.273  | ± 0.120 | s/op  |
| WriteHashBenchmark.test | IN_MEMORY            | 10000000      | ss   | 4   | 4.862  | ± 0.970 | s/op  |
| WriteHashBenchmark.test | SORTING              | 1000          | ss   | 4   | 0.005  | ± 0.033 | s/op  |
| WriteHashBenchmark.test | SORTING              | 10000         | ss   | 4   | 0.014  | ± 0.008 | s/op  |
| WriteHashBenchmark.test | SORTING              | 100000        | ss   | 4   | 0.070  | ± 0.111 | s/op  |
| WriteHashBenchmark.test | SORTING              | 1000000       | ss   | 4   | 0.835  | ± 0.310 | s/op  |
| WriteHashBenchmark.test | SORTING              | 10000000      | ss   | 4   | 13.919 | ± 1.908 | s/op  |
Writing using the in-memory method is faster, but only works if you can keep the full hash file in memory while building it. Sorting is about 3x slower, but is more reliable for very large data sets.
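If the hash file cannot be kept in memory, the sorting-based construction can be requested on the writer. The sketch below is only an assumption: the ConstructionMethod enum and setConstructionMethod setter are names inferred from the benchmark parameters above, so verify them against the API documentation.

```java
import com.spotify.sparkey.ConstructionMethod; // assumed name, see note above
import com.spotify.sparkey.Sparkey;
import com.spotify.sparkey.SparkeyWriter;

import java.io.File;
import java.io.IOException;

public class SortingHashSketch {
  public static void main(String[] args) throws IOException {
    SparkeyWriter writer = Sparkey.createNew(new File("big.spi"));
    // Assumed setter: choose the sorting-based hash construction for data sets
    // whose hash file does not fit in memory while it is being built.
    writer.setConstructionMethod(ConstructionMethod.SORTING);
    writer.put("key", "value");
    writer.writeHash();
    writer.close();
  }
}
```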