# Array storage benchmark
Compare the storage speed, retrieval speed and file size for various methods of storing 2D numpy arrays.
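As a rough illustration of what is being measured, here is a minimal sketch that times one store/load round-trip and reports the file size for a single method (numpy's `save`/`load`). The function and array names are made up for this example; it is not the benchmark's actual code.

```python
from os import path
from time import perf_counter

import numpy as np


def time_roundtrip(arr, fname='example.npy'):
    """Time storing and retrieving an array, and report the resulting file size."""
    t0 = perf_counter()
    np.save(fname, arr, allow_pickle=False)
    store_time = perf_counter() - t0

    t0 = perf_counter()
    loaded = np.load(fname, allow_pickle=False)
    retrieve_time = perf_counter() - t0

    assert np.array_equal(arr, loaded), 'data changed during store/load'
    return store_time, retrieve_time, path.getsize(fname)


if __name__ == '__main__':
    data = np.random.rand(1000, 500)
    store, retrieve, size = time_roundtrip(data)
    print('store {0:.4f}s  retrieve {1:.4f}s  size {2:d}b'.format(store, retrieve, size))
```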
## Hardware etc
The results here were obtained on a normal desktop PC, several years old, running Ubuntu with an SSD for storage. You can easily run the benchmarks on your own machine to get results that are more relevant to you, and you can also apply them to your own data.
## Methods
Name | Description | Fast | Small^ | Portability | Ease of use | Human-readable | Flexible% | Notes |
---|---|---|---|---|---|---|---|---|
Csv~ | comma-separated values | β β β | β β β | β β β | β β β | β β | β β | only 2D |
JSON~ | JavaScript object notation | β β β | β β β | β β β | β β β ++ | β β | β β | any dim., unequal rows |
b64Enc | base 64 encoding | β β β | β β β | β β β | β β β | β β | β β | more network, not files |
JsonTricks | json-tricks compact | β β β | β β β | β β β | β β β + | β β | β β | many types beyond numpy |
MsgPack | binary version of JSON | β β β | β β β | β β β | β β β + | β β | β β | |
Pickle~ | python pickle | β β β | β β β | β β β | β β β | β β | β β | any object, not backward compatible |
Binary~ | pure raw data | β β β | β β β | β β β | β β β | β β | β β | dimensions & dtype stored separately |
NPY | numpy .npy (no pickle) | β β β | β β β | β β β | β β β | β β | β β | with pickle mode OFF |
NPYCompr | numpy .npz | β β β | β β β | β β β | β β β | β β | β β | multiple matrices |
PNG | encoded as png image | β β β | β β β | β β β | β β β ++ | β β | β β | only 2D; for fun but works |
FortUnf | fortran unformatted | β β β | β β β | β β β | β β β + | β β | β β | often compiler dependent |
MatFile | Matlab .mat file | β β β | β β β | β β β | β β β + | β β | β β | multiple matrices |
- ^ Two checks if it's small for dense data, three checks if also for sparse. All gzipped results are small for sparse data.
- % E.g. easily supports 3D or higher arrays, unequal columns, inhomogeneous type columns...
- ~ Also tested with gzip; the ratings refer to the non-gzipped version. Gzipping is always much slower to write and a bit slower to read; for text formats the result is at least 50% smaller (see the sketch after these notes).
- + Rating refers to using a semi-popular package (probably scipy), as opposed to only python and numpy.
- ++ Very easy (three checks) with an unpopular and/or dedicated package, but the rating refers to only python and numpy.
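To give an idea of what the table entries involve in practice, here is a minimal sketch of a store/load round-trip for a few of the methods above, including a gzipped csv variant. It uses only calls from numpy, pickle and gzip; the file names are arbitrary and it is not the benchmark's own code.

```python
import gzip
import pickle

import numpy as np

arr = np.random.rand(100, 50)

# Csv: human-readable text, only 2D.
np.savetxt('data.csv', arr, delimiter=',')
csv_back = np.loadtxt('data.csv', delimiter=',')

# Csv + gzip: numpy compresses transparently when the name ends in .gz.
np.savetxt('data.csv.gz', arr, delimiter=',')
gz_back = np.loadtxt('data.csv.gz', delimiter=',')

# NPY: numpy's own binary format (pickle mode off).
np.save('data.npy', arr, allow_pickle=False)
npy_back = np.load('data.npy', allow_pickle=False)

# NPZ (compressed): can hold multiple matrices per file.
np.savez_compressed('data.npz', first=arr)
npz_back = np.load('data.npz')['first']

# Pickle (gzipped): any python object, but not portable outside python.
with gzip.open('data.pkl.gz', 'wb') as fh:
    pickle.dump(arr, fh)
with gzip.open('data.pkl.gz', 'rb') as fh:
    pkl_back = pickle.load(fh)

for back in (csv_back, gz_back, npy_back, npz_back, pkl_back):
    assert np.allclose(arr, back)
```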
You can install all dependencies using `pip install -r requirements.pip`. Csv and NPY were done with numpy; JSON and compact JSON (JsonTricks) were done with pyjson_tricks; PNG was done with imgarray; Fortran unformatted and Matlab were done with scipy; pickle, base64 and gzipping were done with python built-ins. HDF5 uses h5py (not finished, see issue 4). MessagePack uses msgpack-numpy. Seaborn is needed for plotting.
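As a hedged sketch of how a couple of the third-party packages named above are typically called (json_tricks, msgpack with msgpack-numpy, and scipy.io for .mat files): this is illustrative only, not the benchmark's actual implementation, so check each package's documentation for the exact API.

```python
import msgpack
import msgpack_numpy
import numpy as np
from json_tricks import dumps, loads
from scipy.io import loadmat, savemat

arr = np.random.rand(10, 4)

# json-tricks: JSON with numpy support (and many other types).
json_text = dumps(arr)
json_back = loads(json_text)

# msgpack-numpy: binary JSON-like format; patch() teaches msgpack about arrays.
msgpack_numpy.patch()
packed = msgpack.packb(arr)
msgpack_back = msgpack.unpackb(packed)

# scipy: Matlab .mat files, stored as named matrices.
savemat('data.mat', {'arr': arr})
mat_back = loadmat('data.mat')['arr']

for back in (json_back, msgpack_back, mat_back):
    assert np.allclose(arr, back)
```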
## Results
### Dense random matrix
### Sparse random matrix
99% of values are zero, so compression ratios are very good.
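One way to get such a matrix (an assumption about the setup, not necessarily the benchmark's exact generator) is to draw random values and then zero out 99% of them:

```python
import numpy as np

rng = np.random.RandomState(42)

# Dense random values, then keep only ~1% of them (the rest become zero).
dense = rng.rand(1000, 500)
mask = rng.rand(1000, 500) < 0.01
sparse_like = np.where(mask, dense, 0.0)

print('fraction of non-zero values: {0:.4f}'.format(
    np.count_nonzero(sparse_like) / sparse_like.size))
```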
### Real data
Scattering probabilities for hydrogen and carbon monoxide (many doubles between 0 and 1, most close to 0). You can easily replace this with your own data by overwriting testdata.csv.
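If you want to benchmark your own data, a plain comma-separated dump of a 2D array should work (an assumption about the expected layout of testdata.csv; check the loading code if your file has headers or a different delimiter):

```python
import numpy as np

# Replace the bundled test data with your own 2D array (no header, comma-separated).
my_data = np.random.rand(200, 80)  # stand-in for your real array
np.savetxt('testdata.csv', my_data, delimiter=',')
```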
## More methods
Pull requests with other methods (serious or otherwise) are welcome! There might be some ideas in the issue tracker.