Async::HTTP

An asynchronous client and server implementation of HTTP/1.0, HTTP/1.1 and HTTP/2 including TLS. Support for streaming requests and responses. Built on top of async and async-io. falcon provides a rack-compatible server.

Installation

Add this line to your application's Gemfile:

gem 'async-http'

And then execute:

$ bundle

Or install it yourself as:

$ gem install async-http

Usage

Please see the project documentation or serve it locally using bake utopia:project:serve.
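
For example, from a checkout of this repository with its development dependencies installed (the utopia:project tasks are assumed to be provided by the project's bake tooling):

$ bake utopia:project:serve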

Post JSON data

Here is an example showing how to post a data structure as JSON to a remote resource:

#!/usr/bin/env ruby

require 'json'
require 'async'
require 'async/http/internet'

data = {'life' => 42}

Async do
	# Make a new internet:
	internet = Async::HTTP::Internet.new
	
	# Prepare the request:
	headers = [['accept', 'application/json'], ['content-type', 'application/json']]
	body = [JSON.dump(data)]
	
	# Issue a POST request:
	response = internet.post("https://httpbin.org/anything", headers, body)
	
	# Parse and print the JSON response body:
	pp JSON.parse(response.read)
ensure
	# The internet is closed for business:
	internet&.close
end

Consider using async-rest instead.

Multiple Requests

To issue multiple requests concurrently, you should use a barrier, e.g.

#!/usr/bin/env ruby

require 'async'
require 'async/barrier'
require 'async/http/internet'

TOPICS = ["ruby", "python", "rust"]

Async do
	internet = Async::HTTP::Internet.new
	barrier = Async::Barrier.new
	
	# Spawn an asynchronous task for each topic:
	TOPICS.each do |topic|
		barrier.async do
			response = internet.get "https://www.google.com/search?q=#{topic}"
			puts "Found #{topic}: #{response.read.scan(topic).size} times."
		end
	end
	
	# Ensure we wait for all requests to complete before continuing:
	barrier.wait
ensure
	internet&.close
end

Limiting Requests

If you need to limit the number of simultaneous requests, use a semaphore.

#!/usr/bin/env ruby

require 'async'
require 'async/barrier'
require 'async/semaphore'
require 'async/http/internet'

TOPICS = ["ruby", "python", "rust"]

Async do
	internet = Async::HTTP::Internet.new
	barrier = Async::Barrier.new
	semaphore = Async::Semaphore.new(2, parent: barrier)
	
	# Spawn an asynchronous task for each topic:
	TOPICS.each do |topic|
		semaphore.async do
			response = internet.get "https://www.google.com/search?q=#{topic}"
			puts "Found #{topic}: #{response.read.scan(topic).size} times."
		end
	end
	
	# Ensure we wait for all requests to complete before continuing:
	barrier.wait
ensure
	internet&.close
end

Persistent Connections

To keep connections alive, install the thread-local gem, require async/http/internet/instance, and use the instance, e.g.

#!/usr/bin/env ruby

require 'async'
require 'async/http/internet/instance'

Async do
	internet = Async::HTTP::Internet.instance
	response = internet.get "https://www.google.com/search?q=test"
	puts "Found #{response.read.size} results."
end

Downloading a File

Here is an example showing how to download a file and save it to a local path:

#!/usr/bin/env ruby

require 'async'
require 'async/http/internet'

Async do
	# Make a new internet:
	internet = Async::HTTP::Internet.new
	
	# Issue a GET request to Google:
	response = internet.get("https://www.google.com/search?q=kittens")
	
	# Save the response body to a local file:
	response.save("/tmp/search.html")
ensure
	# The internet is closed for business:
	internet&.close
end

Basic Client/Server

Here is a basic example of a client/server running in the same reactor:

#!/usr/bin/env ruby

require 'async'
require 'async/http/server'
require 'async/http/client'
require 'async/http/endpoint'
require 'protocol/http/response'

endpoint = Async::HTTP::Endpoint.parse('http://127.0.0.1:9294')

app = lambda do |request|
	Protocol::HTTP::Response[200, {}, ["Hello World"]]
end

server = Async::HTTP::Server.new(app, endpoint)
client = Async::HTTP::Client.new(endpoint)

Async do |task|
	server_task = task.async do
		server.run
	end
	
	response = client.get("/")
	
	puts response.status
	puts response.read
	
	server_task.stop
end

Advanced Verification

You can hook into SSL certificate verification to apply additional checks, for example pinning a set of trusted certificate fingerprints:

require 'async'
require 'async/http'

# These are generated from the certificate chain that the server presented.
trusted_fingerprints = {
	"dac9024f54d8f6df94935fb1732638ca6ad77c13" => true,
	"e6a3b45b062d509b3382282d196efe97d5956ccb" => true,
	"07d63f4c05a03f1c306f9941b8ebf57598719ea2" => true,
	"e8d994f44ff20dc78dbff4e59d7da93900572bbf" => true,
}

Async do
	endpoint = Async::HTTP::Endpoint.parse("https://www.codeotaku.com/index")
	
	# This is a quick hack/POC:
	ssl_context = endpoint.ssl_context
	
	ssl_context.verify_callback = proc do |verified, store_context|
		certificate = store_context.current_cert
		fingerprint = OpenSSL::Digest::SHA1.new(certificate.to_der).to_s
		
		if trusted_fingerprints.include? fingerprint
			true
		else
			Console.logger.warn("Untrusted Certificate Fingerprint"){fingerprint}
			false
		end
	end
	
	endpoint = endpoint.with(ssl_context: ssl_context)
	
	client = Async::HTTP::Client.new(endpoint)
	
	response = client.get(endpoint.path)
	
	pp response.status, response.headers.fields, response.read
end

Timeouts

Here's a basic example with a timeout:

#!/usr/bin/env ruby

require 'async'
require 'async/http/internet'

Async do |task|
	internet = Async::HTTP::Internet.new
	
	# Request will timeout after 2 seconds
	task.with_timeout(2) do
		response = internet.get "https://httpbin.org/delay/10"
	end
rescue Async::TimeoutError
	puts "The request timed out"
ensure
	internet&.close
end

Performance

On a 4-core 8-thread i7, running ab, which uses discrete (non-keep-alive) connections:

$ ab -c 8 -t 10 http://127.0.0.1:9294/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            9294

Document Path:          /
Document Length:        13 bytes

Concurrency Level:      8
Time taken for tests:   1.869 seconds
Complete requests:      50000
Failed requests:        0
Total transferred:      2450000 bytes
HTML transferred:       650000 bytes
Requests per second:    26755.55 [#/sec] (mean)
Time per request:       0.299 [ms] (mean)
Time per request:       0.037 [ms] (mean, across all concurrent requests)
Transfer rate:          1280.29 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.2      0       6
Waiting:        0    0   0.2      0       6
Total:          0    0   0.2      0       6

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      1
  98%      1
  99%      1
 100%      6 (longest request)

On a 4-core 8-thread i7, running wrk, which uses 8 keep-alive connections:

$ wrk -c 8 -d 10 -t 8 http://127.0.0.1:9294/
Running 10s test @ http://127.0.0.1:9294/
  8 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   217.69us    0.99ms  23.21ms   97.39%
    Req/Sec    12.18k     1.58k   17.67k    83.21%
  974480 requests in 10.10s, 60.41MB read
Requests/sec:  96485.00
Transfer/sec:      5.98MB

According to these results, the cost of handling discrete connections is quite high (roughly 27,000 requests/second with ab) compared to keep-alive connections (roughly 96,000 requests/second with wrk), while general throughput seems pretty decent.

Semantic Model

Scheme

HTTP/1 has an implicit scheme determined by the kind of connection made to the server (either http or https), while HTTP/2 models this explicitly and the client indicates this in the request using the :scheme pseudo-header (typically https). To normalize this, Async::HTTP::Client and Async::HTTP::Server have a default scheme which is used if none is supplied.
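
As a minimal sketch (assuming the endpoint exposes the parsed scheme via #scheme, consistent with how endpoints are used in the examples above):

#!/usr/bin/env ruby

require 'async/http/endpoint'

# The scheme is parsed from the URL and carried by the endpoint:
endpoint = Async::HTTP::Endpoint.parse("https://www.codeotaku.com/index")
puts endpoint.scheme # => "https"

# A client constructed from this endpoint uses that scheme as its default,
# so HTTP/2 requests carry :scheme "https" while HTTP/1 implies it via TLS.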

Version

HTTP/1 has an explicit version while HTTP/2 does not expose the version in any way.
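
A hedged sketch of how this surfaces in practice (assuming the response object exposes #version, which Protocol::HTTP::Response defines as an attribute):

#!/usr/bin/env ruby

require 'async'
require 'async/http/internet'

Async do
	internet = Async::HTTP::Internet.new
	
	response = internet.get("https://www.google.com/search?q=test")
	
	# HTTP/1 reports the negotiated version (e.g. "HTTP/1.1"); for HTTP/2 the
	# library fills this in itself, since the protocol never sends a version.
	puts response.version
	
	# Consume the body so the connection can be reused or cleanly closed:
	response.read
ensure
	internet&.close
end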

Reason

HTTP/1 responses contain a reason field which is largely irrelevant. HTTP/2 does not support this field.
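
For context, the difference at the wire level looks roughly like this (illustrative only, not output from this library):

HTTP/1.1 404 Not Found    (HTTP/1.1 status line with a reason phrase)
:status: 404              (HTTP/2 pseudo-header; no reason phrase)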

Contributing

We welcome contributions to this project.

  1. Fork it.
  2. Create your feature branch (git checkout -b my-new-feature).
  3. Commit your changes (git commit -am 'Add some feature').
  4. Push to the branch (git push origin my-new-feature).
  5. Create a new Pull Request.

See Also

  • benchmark-http — A benchmarking tool to report on web server concurrency.
  • falcon — A rack-compatible server built on top of async-http.
  • async-websocket — Asynchronous WebSocket client and server.
  • async-rest — A RESTful resource layer built on top of async-http.
  • async-http-faraday — A faraday adapter to use async-http.
