  • Stars: 126
  • Rank: 282,735 (top 6%)
  • Language: Rust
  • License: Apache License 2.0
  • Created: over 3 years ago
  • Updated: 5 months ago


Repository Details

Message queue implemented on top of PostgreSQL


sqlxmq

A job queue built on sqlx and PostgreSQL.

This library allows a CRUD application to run background jobs without complicating its deployment. The only runtime dependency is PostgreSQL, so this is ideal for applications already using a PostgreSQL database.

Although using a SQL database as a job queue means compromising on the latency of delivered jobs, it entirely avoids several show-stopping issues found in ordinary job queues.

With most other job queues, in-flight jobs are state that is not covered by normal database backups. Even if jobs are backed up, there is no way to restore both a database and a job queue to a consistent point-in-time without manually resolving conflicts.

By storing jobs in the database, existing backup procedures will store a perfectly consistent state of both in-flight jobs and persistent data. Additionally, jobs can be spawned and completed as part of other transactions, making it easy to write correct application code.

Leveraging the power of PostgreSQL, this job queue offers several features not present in other job queues.

Features

  • Send/receive multiple jobs at once.

    This reduces the number of queries to the database.

  • Send jobs to be executed at a future date and time.

    Avoids the need for a separate scheduling system.

  • Reliable delivery of jobs.

  • Automatic retries with exponential backoff.

    Number of retries and initial backoff parameters are configurable.

  • Transactional sending of jobs.

    Avoids sending spurious jobs if a transaction is rolled back.

  • Transactional completion of jobs.

    If all side-effects of a job are updates to the database, this provides true exactly-once execution of jobs.

  • Transactional check-pointing of jobs.

    Long-running jobs can check-point their state to avoid having to restart from the beginning if there is a failure: the next retry can continue from the last check-point.

  • Opt-in strictly ordered job delivery.

    Jobs within the same channel will be processed strictly in-order if this option is enabled for the job.

  • Fair job delivery.

    A channel with a lot of jobs ready to run will not starve a channel with fewer jobs.

  • Opt-in two-phase commit.

    This is particularly useful on an ordered channel where a position can be "reserved" in the job order, but not committed until later.

  • JSON and/or binary payloads.

    Jobs can use whichever is most convenient.

  • Automatic keep-alive of jobs.

    Long-running jobs will automatically be "kept alive" to prevent them being retried whilst they're still ongoing.

  • Concurrency limits.

    Specify the minimum and maximum number of concurrent jobs each runner should handle.

  • Built-in job registry via an attribute macro.

    Jobs can be easily registered with a runner, and default configuration specified on a per-job basis.

  • Implicit channels.

    Channels are implicitly created and destroyed when jobs are sent and processed, so no setup is required.

  • Channel groups.

    Easily subscribe to multiple channels at once, thanks to the separation of channel name and channel arguments.

  • NOTIFY-based polling.

    This saves resources when few jobs are being processed.

Getting started

Database schema

This crate expects certain database tables and stored procedures to exist. You can copy the migration files from this crate into your own migrations folder.

All database items created by this crate are prefixed with mq, so as not to conflict with your own schema.
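
For example, if you already use sqlx's migration support, the copied files can be applied together with your own migrations. A minimal sketch, assuming the files live in a ./migrations folder next to Cargo.toml:

// Run at startup, before constructing the job runner; this applies the
// mq tables and stored procedures along with your own migrations.
sqlx::migrate!("./migrations").run(&pool).await?;

The same folder can also be applied from the command line with sqlx-cli (`sqlx migrate run`).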

Defining jobs

The first step is to define a function to be run on the job queue.

use std::error::Error;

use sqlxmq::{job, CurrentJob};

// Arguments to the `#[job]` attribute allow setting default job options.
#[job(channel_name = "foo")]
async fn example_job(
    // The first argument should always be the current job.
    mut current_job: CurrentJob,
    // Additional arguments are optional, but can be used to access context
    // provided via [`JobRegistry::set_context`].
    message: &'static str,
) -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
    // Decode a JSON payload
    let who: Option<String> = current_job.json()?;

    // Do some work
    println!("{}, {}!", message, who.as_deref().unwrap_or("world"));

    // Mark the job as complete
    current_job.complete().await?;

    Ok(())
}
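
If a job's only side effects are database writes, those writes can be committed in the same transaction that marks the job complete, which is what the "transactional completion" feature above refers to. A minimal sketch of that pattern inside a job body; the `pool()` and `complete_with_transaction()` names used here are assumptions rather than quotes from this README, so check the crate documentation for the exact API:

// Sketch only: method names are assumed, not taken from this README.
let mut tx = current_job.pool().begin().await?;

sqlx::query("INSERT INTO greetings (who) VALUES ($1)")
    .bind(who.as_deref().unwrap_or("world"))
    .execute(&mut tx)
    .await?;

// The insert and the job's completion commit atomically, so the job's
// side effect happens exactly once even across retries.
current_job.complete_with_transaction(tx).await?;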

Listening for jobs

Next we need to create a job runner: this is what listens for new jobs and executes them.

use std::error::Error;

use sqlxmq::JobRegistry;


#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // You'll need to provide a Postgres connection pool.
    let pool = connect_to_db().await?;

    // Construct a job registry from our single job.
    let mut registry = JobRegistry::new(&[example_job]);
    // Here is where you can configure the registry
    // registry.set_error_handler(...)

    // And add context
    registry.set_context("Hello");

    let runner = registry
        // Create a job runner using the connection pool.
        .runner(&pool)
        // Here is where you can configure the job runner
        // Aim to keep 10-20 jobs running at a time.
        .set_concurrency(10, 20)
        // Start the job runner in the background.
        .run()
        .await?;

    // The job runner will continue listening and running
    // jobs until `runner` is dropped.
    Ok(())
}
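
As written, `main` returns immediately and therefore drops `runner` straight away; a real service would typically hold on to it until shutdown is requested. One way to do that, assuming the Tokio runtime used above (with its `signal` feature enabled):

// Keep `runner` alive, and therefore keep processing jobs, until Ctrl+C.
tokio::signal::ctrl_c().await?;
drop(runner);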

Spawning a job

The final step is to actually run a job.

example_job.builder()
    // This is where we can override job configuration
    .set_channel_name("bar")
    .set_json("John")?
    .spawn(&pool)
    .await?;
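
The "transactional sending" feature from the list above falls out of the same builder: spawning inside an open transaction means the job is only sent if that transaction commits. A short sketch, assuming `spawn` accepts any sqlx executor (including a mutable reference to a transaction):

let mut tx = pool.begin().await?;

sqlx::query("UPDATE users SET welcomed = TRUE WHERE name = $1")
    .bind("John")
    .execute(&mut tx)
    .await?;

example_job.builder()
    .set_json("John")?
    .spawn(&mut tx)
    .await?;

// Rolling back instead of committing discards both the update and the job.
tx.commit().await?;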

Note on README

Most of the readme is automatically copied from the crate documentation by cargo-sync-readme. This way the readme is always in sync with the docs and examples are tested.

So if you find a part of the readme you'd like to change between <!-- cargo-sync-readme start --> and <!-- cargo-sync-readme end --> markers, don't edit README.md directly, but rather change the documentation on top of src/lib.rs and then synchronize the readme with:

cargo sync-readme

(make sure the cargo-sync-readme subcommand is installed first):

cargo install cargo-sync-readme

More Repositories

1. ijson: More efficient alternative to `serde_json::Value` which saves memory by interning primitive values and using tagged pointers. (Rust, 125 stars)
2. act-zero (Rust, 121 stars)
3. rnet (Rust, 79 stars)
4. query_interface: Dynamically query a type-erased object for any trait implementation. (Rust, 66 stars)
5. lockless: Composable, lock-free, allocation-light data structures. (Rust, 59 stars)
6. aoc2018: Rust solutions to the Advent of Code 2018. (Rust, 45 stars)
7. meteor-reactive-publish: Enable server-side reactivity for Meteor.publish. (CoffeeScript, 41 stars)
8. spanr: Procedural macro span debugger/visualizer. (Rust, 40 stars)
9. aerosol: Dependency injection for Rust. (Rust, 39 stars)
10. rust-field-offset: Safe pointer-to-member functionality for Rust. (Rust, 32 stars)
11. posts (22 stars)
12. actix-interop: Allow access to actor contexts from within normal future combinators and async/await blocks. (Rust, 14 stars)
13. mockalloc: Rust library for testing code relying on the global allocator. (Rust, 12 stars)
14. scoped-tls-hkt: A more flexible version of the Rust `scoped-tls` library. (Rust, 11 stars)
15. raft-playground (Rust, 10 stars)
16. meteor-server-deps: Enable server-side reactivity for Meteor. (CoffeeScript, 10 stars)
17. raft-zero: Implementation of the Raft consensus algorithm on top of the act-zero actor framework. (Rust, 8 stars)
18. aoc2019: Solutions to the Advent of Code 2019 in Rust. (Rust, 7 stars)
19. minipre: Minimal C pre-processor in Rust. (Rust, 7 stars)
20. c_fixed_string-rs (Rust, 5 stars)
21. OpenGL-D-Bindings: Up-to-date stand-alone OpenGL bindings for the D programming language. (D, 5 stars)
22. franz: Rust Kafka client library using Tokio. (Rust, 4 stars)
23. five_words (Rust, 4 stars)
24. agentdb (Rust, 4 stars)
25. tenorite: Fast logic simulation Rust library. (Rust, 4 stars)
26. aoc2023 (Rust, 3 stars)
27. bevy_clap: Bevy plugin to parse CLI arguments using clap. (Rust, 2 stars)
28. pure-raft (Rust, 2 stars)
29. test (1 star)
30. please (Rust, 1 star)
31. ruplace_viewer (Rust, 1 star)
32. diesel-setup-deps: Perform diesel setup for dependencies. (Rust, 1 star)
33. topgui (Batchfile, 1 star)
34. deque_cell: Repository for the crates.io `deque_cell` package. (Rust, 1 star)
35. rust-workshop (Rust, 1 star)
36. rust-crud-example: Example Rust CRUD app using the Diesel ORM and the Iron web framework. (Rust, 1 star)