# miniqueue

A stupid simple, single-binary message queue using HTTP/2 or the Redis protocol.

Most messaging workloads don't require enormous amounts of data, endless features, or infinite scaling. Instead, they'd probably be better off with something dead simple.

miniqueue is just that. A ridiculously simple, high-performance queue. You can publish bytes to topics and be sure that your consumers will receive what you published, nothing more.
## Features
- Redis Protocol Support
- Simple to run
- Very fast, see benchmarks
- Not infinitely scalable
- Multiple topics
- HTTP/2
- Publish
- Subscribe
- Acknowledgements
- Persistence
- Prometheus metrics [WIP]
## API

### Redis
You can communicate with miniqueue using any major Redis client library that supports custom commands. The command set is identical to the HTTP/2 implementation and is listed under the Commands heading.

Examples of using the Redis interface can be found in the `redis_test.go` file.
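To illustrate why any such client works, here is a sketch of the RESP framing a Redis client library produces on the wire for a custom command. The `respEncode` helper and the `publish` command name are illustrative assumptions, not part of the miniqueue codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// respEncode frames a command and its arguments as a RESP array of
// bulk strings -- the wire format every Redis client library emits.
func respEncode(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// Hypothetical publish command; in practice your client library
	// performs this framing for you via its custom-command API.
	fmt.Printf("%q\n", respEncode("publish", "foo", "helloworld"))
}
```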
### HTTP/2
- `POST /publish/:topic`, where the body contains the bytes to publish to the topic.

  ```sh
  curl -X POST https://localhost:8080/publish/foo --data "helloworld"
  ```

- `POST /subscribe/:topic`, streams messages separated by `\n`:

  ```
  client → server: "INIT"
  server → client: { "msg": [base64], "error": "...", "dackCount": 1 }
  client → server: "ACK"
  ```

- `DELETE /:topic`, deletes the given topic, removing all messages. Note, this is an expensive operation for large topics.

You can also find examples in the `./examples/` directory.
## Usage

miniqueue runs as a single binary, persisting the messages to the filesystem in a directory specified by the `-db` flag, and exposes an HTTP/2 server on the port specified by the `-port` flag.
Note: as the server uses HTTP/2, TLS is required. The certificates in `./testdata` will not be trusted by your client, so for testing you can generate a locally trusted certificate using mkcert and replace them, or specify your own certificate using the `-cert` and `-key` flags.
```
Usage of ./miniqueue:
  -cert string
        path to TLS certificate (default "./testdata/localhost.pem")
  -db string
        path to the db file (default "./miniqueue")
  -human
        human readable logging output
  -key string
        path to TLS key (default "./testdata/localhost-key.pem")
  -level string
        (disabled|debug|info) (default "debug")
  -period duration
        period between runs to check and restore delayed messages (default 1s)
  -port int
        port used to run the server (default 8080)
```
Once running, miniqueue will expose an HTTP/2 server capable of bidirectional streaming between client and server. Subscribers will be delivered incoming messages and can send commands such as `ACK`, `NACK` and `BACK`. Upon a subscriber disconnecting, any outstanding messages are automatically `NACK`'ed and returned to the front of the queue.
Messages sent to subscribers are JSON encoded, in some cases containing additional information to enable certain features. The consumer payload looks like:

```
{
  "msg": "dGVzdA==", // base64 encoded msg
  "dackCount": 2     // number of times the msg has been DACK'ed
}
```

In case of an error, the payload will be:

```
{
  "error": "uh oh, something went wrong"
}
```
To get you started, here are some common ways to get up and running with miniqueue.

Start miniqueue with human readable logs:

```sh
λ ./miniqueue -human
```

Start miniqueue with a custom TLS certificate:

```sh
λ ./miniqueue -cert ./localhost.pem -key ./localhost-key.pem
```

Start miniqueue on a custom port:

```sh
λ ./miniqueue -port 8081
```
## Docker

As of `v0.7.0`, there are published miniqueue Docker images available in the Docker Hub repository `tomarrell/miniqueue`.

It is recommended to use a tagged release build. The `latest` tag tracks the `master` branch.

With the TLS certificate and key in a relative directory `./certs` (these can be generated using mkcert):

```
./certs
├── localhost-key.pem
└── localhost.pem
```
You can execute the following Docker command to run the image:

```sh
$ docker run \
  -v $(pwd)/certs:/etc/miniqueue/certs \
  -p 8080:8080 \
  tomarrell/miniqueue:v0.7.0 \
  -cert /etc/miniqueue/certs/localhost.pem \
  -key /etc/miniqueue/certs/localhost-key.pem \
  -db /var/lib/miniqueue \
  -human
```
## Examples

To take a look at some common usage, we have compiled some examples for reference in the `./examples/` directory. Here you will find common patterns such as:

- Exponential backoff, 1s → 2s → 4s etc.
- Failure resistant workers
- Simple echo
## Commands

A client may send commands to the server over a duplex connection. Commands are in the form of a JSON string to allow for simple encoding/decoding.

Available commands are:

- `"INIT"`: Establishes a new consumer on the topic. If you are consuming for the first time, this should be sent along with the request.
- `"ACK"`: Acknowledges the current message, popping it from the topic and removing it.
- `"NACK"`: Negatively acknowledges the current message, causing it to be returned to the front of the queue. If there is a ready consumer waiting for a message, it will immediately be delivered to that consumer. Otherwise it will be delivered as soon as one becomes available.
- `"BACK"`: Negatively acknowledges the current message, causing it to be returned to the back of the queue. This will cause it to be processed again after the currently waiting messages.
- `"DACK [seconds]"`: Negatively acknowledges the current message, placing it on a delay for a certain number of `seconds`. Once the delay expires, on the next tick given by the `-period` flag, the message will be returned to the front of the queue to be processed as soon as possible. DACK'ed messages will contain a `dackCount` key when consumed, which allows for exponential backoff of the same message if multiple failures occur.
## Benchmarks

As miniqueue is still under development, take these benchmarks with a grain of salt. However, for those curious:

### Publish

```sh
λ go-wrk -c 12 -d 10 -M POST -body "helloworld" https://localhost:8080/publish/test

Running 10s test @ https://localhost:8080/publish/test
  12 goroutine(s) running concurrently
142665 requests in 9.919498387s, 7.89MB read
Requests/sec:           14382.28
Transfer/sec:           814.62KB
Avg Req Time:           834.36µs
Fastest Request:        190µs
Slowest Request:        141.091118ms
Number of Errors:       0
```

### Consume + Ack

```sh
λ ./bench_consume -duration=10s

consumed 42982 times in 10s
4298 (consume+ack)/second
```

Running on my MacBook Pro (15-inch, 2019), with a 2.6 GHz 6-Core Intel Core i7, using Go `v1.15`.
## Contributing

Contributors are more than welcome. Please feel free to open a PR to improve anything you don't like, or would like to add. No PR is too small!

## License

This project is licensed under the MIT license.