  • Stars: 3,654
  • Rank: 12,105 (top 0.3%)
  • Language: Rust
  • License: Apache License 2.0
  • Created: about 12 years ago
  • Updated: about 2 months ago

Repository Details

A proxy server for adding push to your API, used at the core of Fastly's Fanout service

Pushpin

Website: https://pushpin.org/
Forum: https://community.fastly.com/c/pushpin/12

Pushpin is a reverse proxy server written in C++ and Rust that makes it easy to implement WebSocket, HTTP streaming, and HTTP long-polling services. The project is unique among realtime push solutions in that it is designed to address the needs of API creators. Pushpin is transparent to clients and integrates easily into an API stack.

How it works

Pushpin is placed in the network path between the backend and any clients:

[Diagram: pushpin-abstract — Pushpin sits between clients and the backend]

Pushpin communicates with backend web applications using regular, short-lived HTTP requests. This allows backend applications to be written in any language and use any webserver. There are two main integration points:

  1. The backend must handle proxied requests. For HTTP, each incoming request is proxied to the backend. For WebSockets, the activity of each connection is translated into a series of HTTP requests [1] sent to the backend. Pushpin's behavior is determined by how the backend responds to these requests.
  2. The backend must tell Pushpin to push data. Regardless of how clients are connected, data may be pushed to them by making an HTTP POST request to Pushpin's private control API (http://localhost:5561/publish/ by default). Pushpin will inject this data into any client connections as necessary.
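Integration point 2 is just an HTTP POST. As a minimal sketch using only the Python standard library (assuming a Pushpin instance on the default control port; `build_publish_body` is a hypothetical helper name, not part of any Pushpin library):

```python
import json
import urllib.request

def build_publish_body(channel, content):
    # One item per channel, with per-transport formats -- the same shape
    # as the curl publish example later in this document.
    return {
        'items': [
            {
                'channel': channel,
                'formats': {
                    'http-stream': {'content': content},
                },
            }
        ]
    }

def publish(channel, content, control_uri='http://localhost:5561'):
    body = json.dumps(build_publish_body(channel, content)).encode()
    req = urllib.request.Request(
        control_uri + '/publish/',
        data=body,
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(req)

# publish('test', 'hello there\n')  # requires a running Pushpin instance
```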

To assist with integration, there are libraries for many backend languages and frameworks. Pushpin has no libraries on the client side because it is transparent to clients.

Example

To create an HTTP streaming connection, respond to a proxied request with the special headers Grip-Hold and Grip-Channel [2]:

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 22
Grip-Hold: stream
Grip-Channel: test

welcome to the stream

When Pushpin receives the above response from the backend, it will process it and send an initial response to the client that instead looks like this:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: Transfer-Encoding

welcome to the stream

Pushpin eats the special headers and switches to chunked encoding (notice there's no Content-Length). The request between Pushpin and the backend is now complete, but the request between the client and Pushpin remains held open. The request is subscribed to a channel called test.
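Any backend stack can produce that held response. As an illustrative sketch (not taken from the Pushpin docs), a framework-free WSGI app would look like this:

```python
def app(environ, start_response):
    # Respond with the GRIP instruction headers; Pushpin consumes them,
    # holds the client connection open, and subscribes it to 'test'.
    body = b'welcome to the stream\n'
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
        ('Grip-Hold', 'stream'),
        ('Grip-Channel', 'test'),
    ])
    return [body]
```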

Data can then be pushed to the client by publishing data on the test channel:

curl -d '{ "items": [ { "channel": "test", "formats": { "http-stream": { "content": "hello there\n" } } } ] }' \
    http://localhost:5561/publish/

The client would then see the line "hello there" appended to the response stream. Ta-da, transparent realtime push!

For more details, see the HTTP streaming section of the documentation. Pushpin also supports HTTP long-polling and WebSockets.

Example using a library

Using a library on the backend makes integration even easier. Here's another HTTP streaming example, similar to the one shown above, except using Pushpin's Django library. Please note that Pushpin is not Python/Django-specific and there are backend libraries for other languages/frameworks, too.

The Django library requires configuration in settings.py:

MIDDLEWARE_CLASSES = (
    'django_grip.GripMiddleware',
    ...
)

GRIP_PROXIES = [{'control_uri': 'http://localhost:5561'}]

Here's a simple view:

from django.http import HttpResponse
from django_grip import set_hold_stream

def myendpoint(request):
    if request.method == 'GET':
        # subscribe every incoming request to a channel in stream mode
        set_hold_stream(request, 'test')
        return HttpResponse('welcome to the stream\n', content_type='text/plain')
    ...

Here, the set_hold_stream() call flags the request as needing to be turned into a stream, bound to the channel test. The middleware sees this and adds the necessary Grip-Hold and Grip-Channel headers to the response.
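Conceptually (as an illustration only, not django_grip's actual implementation), the middleware's post-processing amounts to copying the view's hold instructions into response headers:

```python
def apply_grip_instructions(response_headers, grip_instructions):
    # Hypothetical sketch: if the view flagged the request for a hold,
    # emit the GRIP headers that Pushpin will consume, as in the raw
    # HTTP example earlier.
    if grip_instructions:
        response_headers['Grip-Hold'] = grip_instructions['hold']
        response_headers['Grip-Channel'] = grip_instructions['channel']
    return response_headers

headers = apply_grip_instructions(
    {'Content-Type': 'text/plain'},
    {'hold': 'stream', 'channel': 'test'},
)
```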

Publishing data is easy:

from gripcontrol import HttpStreamFormat
from django_grip import publish

publish('test', HttpStreamFormat('hello there\n'))

Example using WebSockets

Pushpin supports WebSockets by converting connection activity/messages into HTTP requests and sending them to the backend. For this example, we'll use Pushpin's Express library. As before, please note that Pushpin is not Node/Express-specific and there are backend libraries for other languages/frameworks, too.

The Express library requires configuration and setting up middleware handlers before and after any endpoints:

var express = require('express');
var grip = require('grip');
var expressGrip = require('express-grip');

expressGrip.configure({
    gripProxies: [{'control_uri': 'http://localhost:5561', 'key': 'changeme'}]
});

var app = express();

// Add the pre-handler middleware to the front of the stack
app.use(expressGrip.preHandlerGripMiddleware);

// put your normal endpoint handlers here, for example:
app.get('/hello', function(req, res, next) {
    res.send('hello world\n');

    // next() must be called for the post-handler middleware to execute
    next();
});

// Add the post-handler middleware to the back of the stack
app.use(expressGrip.postHandlerGripMiddleware);

Because of the post-handler middleware, it's important that you call next() at the end of your handlers.

With that structure in place, here's an example of a WebSocket endpoint:

app.post('/websocket', function(req, res, next) {
    var ws = expressGrip.getWsContext(res);

    // If this is a new connection, accept it and subscribe it to a channel
    if (ws.isOpening()) {
        ws.accept();
        ws.subscribe('all');
    }

    while (ws.canRecv()) {
        var message = ws.recv();

        // If return value is null then connection is closed
        if (message == null) {
            ws.close();
            break;
        }

        // broadcast the message to everyone connected
        expressGrip.publish('all', new grip.WebSocketMessageFormat(message));
    }

    // next() must be called for the post-handler middleware to execute
    next();
});

The above code binds all incoming connections to a channel called all. Any received messages are published out to all connected clients.

What's particularly noteworthy is that the above endpoint is stateless. The app doesn't keep track of connections, and the handler code only runs whenever messages arrive. Restarting the app won't disconnect clients.

The while loop is deceptive. It looks like it's looping for the lifetime of the WebSocket connection, but what it's really doing is looping through a batch of WebSocket messages that was just received via HTTP. Often this will be one message, and so the loop performs one iteration and then exits. Similarly, the ws object only exists for the duration of the handler invocation, rather than for the lifetime of the connection as you might expect. It may look like socket code, but it's all an illusion. 🎩
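The batch-draining shape of that loop can be pictured without Express. In this sketch the event names echo the OPEN/TEXT/CLOSE events of the WebSocket-over-HTTP idea, but the dict format is illustrative, not the actual wire protocol:

```python
def handle_batch(events):
    # Each call corresponds to one HTTP request from Pushpin carrying a
    # batch of WebSocket events; the handler drains the batch and returns.
    out = []
    for ev in events:  # often a batch of exactly one
        if ev['type'] == 'OPEN':
            out.append({'type': 'OPEN'})   # accept the connection
        elif ev['type'] == 'CLOSE':
            out.append({'type': 'CLOSE'})  # acknowledge the close
            break
        elif ev['type'] == 'TEXT':
            # a real handler would publish ev['content'] to subscribers here
            out.append({'type': 'TEXT', 'content': ev['content']})
    return out
```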

For details on the underlying protocol conversion, see the WebSocket-Over-HTTP Protocol spec.

Example without a webserver

Pushpin can also connect to backend servers via ZeroMQ instead of HTTP. This may be preferred for writing lower-level services where a real webserver isn't needed. The messages exchanged over the ZeroMQ connection contain the same information as HTTP, encoded as TNetStrings.
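TNetStrings are simple enough to sketch by hand: each value is encoded as `<length>:<payload><type-char>`. Here's a minimal encoder for the types used in the ZeroMQ example, as a sketch of the format (not the actual tnetstring library):

```python
def tnets_dump(value):
    # TNetString encoding: b'<length>:<payload><type-char>'
    if isinstance(value, bytes):
        payload, tag = value, b','
    elif isinstance(value, str):
        payload, tag = value.encode(), b','
    elif isinstance(value, bool):  # must check before int (bool is an int)
        payload, tag = (b'true' if value else b'false'), b'!'
    elif isinstance(value, int):
        payload, tag = str(value).encode(), b'#'
    elif isinstance(value, list):
        payload, tag = b''.join(tnets_dump(v) for v in value), b']'
    elif isinstance(value, dict):
        # dict payload is alternating key/value tnetstrings
        payload = b''.join(
            tnets_dump(k) + tnets_dump(v) for k, v in value.items()
        )
        tag = b'}'
    else:
        raise TypeError('unsupported type: %r' % type(value))
    return str(len(payload)).encode() + b':' + payload + tag
```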

To use a ZeroMQ backend, first make sure there's an appropriate route in Pushpin's routes file:

* zhttpreq/tcp://127.0.0.1:10000

The above line tells Pushpin to bind a REQ-compatible socket on port 10000 that handlers can connect to.

Activating an HTTP stream is as easy as responding on a REP socket:

import zmq
import tnetstring

zmq_context = zmq.Context()
sock = zmq_context.socket(zmq.REP)
sock.connect('tcp://127.0.0.1:10000')

while True:
    req = tnetstring.loads(sock.recv()[1:])

    resp = {
        'id': req['id'],
        'code': 200,
        'reason': 'OK',
        'headers': [
            ['Grip-Hold', 'stream'],
            ['Grip-Channel', 'test'],
            ['Content-Type', 'text/plain']
        ],
        'body': 'welcome to the stream\n'
    }

    sock.send('T' + tnetstring.dumps(resp))

Why another realtime solution?

Pushpin is an ambitious project with two primary goals:

  • Make realtime API development easier. There are many other solutions out there that are excellent for building realtime apps, but few are useful within the context of APIs. For example, you can't use Socket.io to build Twitter's streaming API. A new kind of project is needed in this case.
  • Make realtime push behavior delegable. The reason there isn't a realtime push CDN yet is that the standards and practices necessary for delegating to a third party in a transparent way are not yet established. Pushpin is more than just another realtime push solution; it represents the next logical step in the evolution of realtime web architectures.

To really understand Pushpin, you need to think of it as more like a gateway than a message queue. Pushpin does not persist data and it is agnostic to your application's data model. Your backend provides the mapping to whatever that data model is. Tools like Kafka and RabbitMQ are complementary. Pushpin is also agnostic to your API definition. Clients don't necessarily subscribe to "channels" or receive "messages". Clients make HTTP requests or send WebSocket frames, and your backend decides the meaning of those inputs. Pushpin could perhaps be awkwardly described as "a proxy server that enables web services to delegate the handling of realtime push primitives".

On a practical level, there are many benefits to Pushpin that you don't see anywhere else:

  • The proxy design allows Pushpin to fit nicely within an API stack. This means it can inherit other facilities from your REST API, such as authentication, logging, throttling, etc. It can be combined with an API management system.
  • As your API scales, a multi-tiered architecture will become inevitable. With Pushpin you can easily do this from the start.
  • It works well with microservices. Each microservice can have its own Pushpin instance. No central bus needed.
  • Hot reload. Restarting the backend doesn't disconnect clients.
  • In the case of WebSocket messages being proxied out as HTTP requests, the messages may be handled statelessly by the backend. Messages from a single connection can even be load balanced across a set of backend instances.

Install

Check out the Install guide, which covers how to install and run. Packages are available for Linux (Debian, Ubuntu, CentOS, Red Hat) and Mac (Homebrew), or you can build from source.

By default, Pushpin listens on port 7999 and requests are handled by its internal test handler. You can confirm the server is working by browsing to http://localhost:7999/. Next, you should modify the routes config file to route requests to your backend webserver. See Configuration.
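For example, a routes file that forwards every request to a local backend might contain a single line (port 8000 here is an assumption; use whatever port your app server listens on):

```
* localhost:8000
```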

Scalability

Pushpin is horizontally scalable. Instances don’t talk to each other, and sticky routing is not needed. Backends must publish data to all instances to ensure clients connected to any instance will receive the data. Most of the backend libraries support configuring more than one Pushpin instance, so that a single publish call will send data to multiple instances at once.
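The fan-out can be sketched as a loop over control URIs. The instance URIs below are hypothetical, and the payload shape matches the HTTP publish format shown earlier:

```python
import json
import urllib.request

# Hypothetical list of control URIs, one per Pushpin instance.
CONTROL_URIS = [
    'http://pushpin-1:5561',
    'http://pushpin-2:5561',
]

def build_body(channel, content):
    item = {'channel': channel,
            'formats': {'http-stream': {'content': content}}}
    return json.dumps({'items': [item]}).encode()

def publish_everywhere(channel, content):
    # Every instance must receive the data, since clients may be
    # connected to any of them.
    body = build_body(channel, content)
    for uri in CONTROL_URIS:
        req = urllib.request.Request(
            uri + '/publish/', data=body,
            headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(req)
```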

Optionally, ZeroMQ PUB/SUB can be used to send data to Pushpin instead of using HTTP POST. When this method is used, subscription information is forwarded to each publisher, such that data will only be published to instances that have listeners.

As for vertical scalability, Pushpin has been tested with up to 1 million concurrent connections running on a single DigitalOcean droplet with 8 CPU cores. In practice, you may want to plan for fewer connections per instance, depending on your throughput. The new-connection accept rate is about 800/sec (though this also depends on the speed of your backend), and the message throughput is about 8,000/sec. The important thing is that Pushpin is horizontally scalable, which makes overall capacity effectively limitless.

What does the name mean?

Pushpin means to "pin" connections open for "pushing".

License

Pushpin is offered under the Apache License, Version 2.0. See the LICENSE file.

Footnotes

[1]: Pushpin can communicate WebSocket activity to the backend using either HTTP or WebSockets. Conversion to HTTP is generally recommended as it makes the backend easier to reason about.

[2]: GRIP (Generic Realtime Intermediary Protocol) is the name of Pushpin's backend protocol. More about that here.
