# base💯

Encode things into Emoji.
base💯 gives every possible byte value its own emoji, so it can represent arbitrary binary data as a string of emoji.
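For intuition, the mapping can be sketched in a few lines of Rust: byte `b` becomes the emoji at code point U+1F3F7 + `b`, which is consistent with the example output in the Usage section below (for instance, `t` = 0x74 maps to 👫). This is only an illustrative sketch, not necessarily how the crate implements it internally:

```rust
/// Illustrative sketch: map byte `b` to the Unicode scalar U+1F3F7 + b.
/// Every code point in that range serializes to exactly 4 bytes of UTF-8.
fn encode(input: &[u8]) -> String {
    input
        .iter()
        .map(|&b| std::char::from_u32(0x1F3F7 + b as u32).unwrap())
        .collect()
}

fn main() {
    // Reproduces the example shown in the Usage section.
    println!("{}", encode(b"the quick brown fox jumped over the lazy dog\n"));
}
```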
## Usage

```
$ echo "the quick brown fox jumped over the lazy dog" | base100
👫👟👜🐗👨👬👠👚👢🐗👙👩👦👮👥🐗👝👦👯🐗👡👬👤👧👜👛🐗👦👭👜👩🐗👫👟👜🐗👣👘👱👰🐗👛👦👞🐁
```
base💯 reads from stdin unless an input file is given and writes its output to stdout. Data is encoded by default, unless `-d` or `--decode` is specified; the `--encode` flag does nothing and exists solely to accommodate lazy people who don't want to read the docs (like me).
```
USAGE:
    base100 [FLAGS] [input]

FLAGS:
    -d, --decode     Tells base💯 to decode this data
    -e, --encode     Tells base💯 to encode this data
    -h, --help       Prints help information
    -V, --version    Prints version information

ARGS:
    <input>    The input file to use
```
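Decoding with `-d` is just the inverse mapping: each emoji is turned back into the byte it stands for. Again, a hedged sketch under the same assumption as above, not the crate's actual code:

```rust
/// Illustrative sketch of decoding: each emoji in U+1F3F7..=U+1F4F6 maps back
/// to a single byte; anything outside that range makes decoding fail.
fn decode(input: &str) -> Option<Vec<u8>> {
    input
        .chars()
        .map(|c| {
            (c as u32)
                .checked_sub(0x1F3F7)       // reject code points below the range
                .filter(|&off| off <= 0xFF) // reject code points above it
                .map(|off| off as u8)
        })
        .collect()
}

fn main() {
    // 👟 stands for 'h' and 👠 for 'i' under the mapping above.
    assert_eq!(decode("👟👠"), Some(b"hi".to_vec()));
}
```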
## Installation

To install base💯, run

```
$ cargo install base100
```

base💯 also has a much faster SIMD-accelerated implementation (see SIMD Performance below), which requires a nightly compiler and an AVX2-capable CPU. To install it with the `simd` feature enabled, run

```
$ RUSTFLAGS="-C target-cpu=native" cargo install base100 --features simd
```
## Performance

The benchmarks below compare base💯 against GNU coreutils' base64, using `pv` to measure throughput.

### Scalar Performance
```
$ base100 --version
base💯 0.4.1
$ base64 --version
base64 (GNU coreutils) 8.28

$ cat /dev/urandom | pv | base100 > /dev/null
[ 247MiB/s]
$ cat /dev/urandom | pv | base64 > /dev/null
[ 232MiB/s]

$ cat /dev/urandom | pv | base100 | base100 -d > /dev/null
[ 233MiB/s]
$ cat /dev/urandom | pv | base64 | base64 -d > /dev/null
[ 176MiB/s]
```
In both scenarios, base💯's throughput is comparable to or better than GNU base64's, for both encoding and the encode-decode round trip.
### SIMD Performance

On a machine supporting AVX2, base💯's SIMD implementation is significantly faster than GNU base64.
To receive this speedup, you need:

- An AVX2-capable processor (Intel Haswell or newer, or AMD Zen)
- A nightly Rust compiler
To build the SIMD-accelerated version from source, run the following in the base100 source directory:

```
$ RUSTFLAGS="-C target-cpu=native" cargo +nightly build --release --features simd
```
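The `RUSTFLAGS="-C target-cpu=native"` part is what allows the compiler to assume your CPU's feature set, including AVX2. If you are unsure whether that took effect, here is a small, self-contained diagnostic (not part of base100, just an illustrative sketch):

```rust
fn main() {
    // True only if the compiler was allowed to assume AVX2 at build time,
    // e.g. via RUSTFLAGS="-C target-cpu=native" on an AVX2-capable machine.
    let compiled_with_avx2 = cfg!(target_feature = "avx2");

    // True if the CPU running this binary supports AVX2 (x86/x86_64 only).
    let cpu_has_avx2 = std::is_x86_feature_detected!("avx2");

    println!("compiled with AVX2: {compiled_with_avx2}, CPU has AVX2: {cpu_has_avx2}");
}
```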
Please note that the below benchmarks were taken on a significantly weaker machine than the above benchmarks, and cannot be directly compared.
```
$ base100 --version
base💯 0.4.1
$ base64 --version
base64 (GNU coreutils) 8.28

$ cat /dev/zero | pv | ./base100 > /dev/null
[1.14GiB/s]
$ cat /dev/zero | pv | base64 > /dev/null
[ 479MiB/s]

$ cat /dev/zero | pv | ./base100 | ./base100 -d > /dev/null
[ 412MiB/s]
$ cat /dev/zero | pv | base64 | base64 -d > /dev/null
[ 110MiB/s]
```
In this scenario, base💯 offers more than double base64's encoding throughput, and an even larger advantage on the encode-decode round trip.
## Caveats
base💯 emits 4 bytes of output for every byte of input, since each emoji is a 4-byte UTF-8 sequence. Encoded data is therefore roughly 4x the size of the original, compared to base64's ~1.33x, so it is a poor choice when space efficiency matters.
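To make that concrete for the 45-byte example sentence from the Usage section (including its trailing newline), under the one-emoji-per-byte assumption sketched earlier:

```rust
fn main() {
    let input_len = 45; // "the quick brown fox jumped over the lazy dog\n"
    let base100_len = input_len * 4;          // one 4-byte emoji per input byte
    let base64_len = (input_len + 2) / 3 * 4; // base64 emits 4 bytes per 3 input bytes
    println!("input: {input_len} B, base100: {base100_len} B, base64: {base64_len} B");
    // prints: input: 45 B, base100: 180 B, base64: 60 B
}
```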
## Future plans
- Allow data to be encoded with the full 1024-element emoji set
- Add further optimizations and ensure we're abusing SIMD as much as possible
- Add multiprocessor support