Tesla is an HTTP client loosely based on Faraday. It embraces the concept of middleware when processing the request/response cycle.
Note that this README refers to the `master` branch of Tesla, not the latest version released on Hex. See the documentation for the version you're using.
For the list of changes, check out the latest release notes.
Define a client module with `use Tesla` and choose from a variety of middleware.
```elixir
defmodule GitHub do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "https://api.github.com"
  plug Tesla.Middleware.Headers, [{"authorization", "token xyz"}]
  plug Tesla.Middleware.JSON

  def user_repos(login) do
    get("/users/" <> login <> "/repos")
  end
end
```
Then use it like this:
```elixir
{:ok, response} = GitHub.user_repos("teamon")

response.status
# => 200

response.body
# => [%{...}, ...]

response.headers
# => [{"content-type", "application/json"}, ...]
```
See below for documentation.
Add `:tesla` as a dependency in `mix.exs`:
```elixir
defp deps do
  [
    {:tesla, "~> 1.9"},

    # optional, but recommended adapter
    {:hackney, "~> 1.20"},

    # optional, required by JSON middleware
    {:jason, "~> 1.4"}
  ]
end
```
Tesla uses Semantic Versioning 2.0.
Configure the default adapter in `config/config.exs` (optional).
```elixir
# config/config.exs
config :tesla, adapter: Tesla.Adapter.Hackney
```
The default adapter is Erlang's built-in `httpc`, but it is not recommended for production use because, among other issues, it does not validate SSL certificates.
- Middleware
- Runtime middleware
- Adapters
- Streaming
- Multipart
- Testing
- Writing middleware
- Direct usage
- Cheatsheet
- Cookbook
- Changelog
Tesla is built around the concept of composable middleware. This is very similar to how Plug Router works.
- `Tesla.Middleware.BaseUrl` - set base URL
- `Tesla.Middleware.Headers` - set request headers
- `Tesla.Middleware.Query` - set query parameters
- `Tesla.Middleware.Opts` - set request options
- `Tesla.Middleware.FollowRedirects` - follow HTTP 3xx redirects
- `Tesla.Middleware.MethodOverride` - set `X-Http-Method-Override` header
- `Tesla.Middleware.Logger` - log requests (method, url, status, and time)
- `Tesla.Middleware.KeepRequest` - keep request `body` and `headers`
- `Tesla.Middleware.PathParams` - use templated URLs
- `Tesla.Middleware.FormUrlencoded` - urlencode POST body, useful for POSTing a map/keyword list
- `Tesla.Middleware.JSON` - JSON request/response body
- `Tesla.Middleware.Compression` - `gzip` and `deflate`
- `Tesla.Middleware.DecodeRels` - decode `Link` header into `opts[:rels]` field in response
- `Tesla.Middleware.BasicAuth` - HTTP Basic Auth
- `Tesla.Middleware.BearerAuth` - HTTP Bearer Auth
- `Tesla.Middleware.DigestAuth` - Digest access authentication
- `Tesla.Middleware.Timeout` - timeout a request after X milliseconds regardless of the server response
- `Tesla.Middleware.Retry` - retry a few times in case of a refused connection
- `Tesla.Middleware.Fuse` - fuse circuit breaker integration
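For example, a single client can stack several of the middleware listed above. A minimal sketch (the module name, URL, and retry settings are illustrative):

```elixir
defmodule MyApi do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "https://api.example.com"
  plug Tesla.Middleware.JSON
  plug Tesla.Middleware.Retry, delay: 500, max_retries: 3
  plug Tesla.Middleware.Logger
end
```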
All HTTP functions, such as `Tesla.get/3` and `Tesla.post/4`, can take a dynamic client as the first argument.
This allows you to use a convenient syntax for modifying the behaviour at runtime.
Consider the following case: the GitHub API can be accessed using OAuth token authorization.
We can't use `plug Tesla.Middleware.Headers, [{"authorization", "token here"}]` since this would be compiled only once and there is no way to insert a dynamic user token.
Instead, we can use `Tesla.client` to create a client with dynamic middleware:
```elixir
defmodule GitHub do
  # notice there is no `use Tesla`

  def user_repos(client, login) do
    # pass the `client` argument to the `Tesla.get` function
    Tesla.get(client, "/users/" <> login <> "/repos")
  end

  def issues(client) do
    Tesla.get(client, "/issues")
  end

  # build a dynamic client based on runtime arguments
  def client(token) do
    middleware = [
      {Tesla.Middleware.BaseUrl, "https://api.github.com"},
      Tesla.Middleware.JSON,
      {Tesla.Middleware.Headers, [{"authorization", "token " <> token}]}
    ]

    Tesla.client(middleware)
  end
end
```
and then:
```elixir
client = GitHub.client(user_token)
client |> GitHub.user_repos("teamon")
client |> GitHub.issues()
```
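Because the middleware lives in the client value rather than in the module, the same functions can be called with differently configured clients (the tokens below are hypothetical):

```elixir
alice = GitHub.client("alice-token")
bob = GitHub.client("bob-token")

{:ok, _} = GitHub.user_repos(alice, "alice")
{:ok, _} = GitHub.issues(bob)
```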
Tesla supports multiple HTTP adapters that do the actual HTTP request processing.
- `Tesla.Adapter.Httpc` - the default, built-in Erlang httpc adapter
- `Tesla.Adapter.Hackney` - hackney, "simple HTTP client in Erlang"
- `Tesla.Adapter.Ibrowse` - ibrowse, "Erlang HTTP client"
- `Tesla.Adapter.Gun` - gun, "HTTP/1.1, HTTP/2 and Websocket client for Erlang/OTP"
- `Tesla.Adapter.Mint` - mint, "Functional HTTP client for Elixir with support for HTTP/1 and HTTP/2"
- `Tesla.Adapter.Finch` - finch, "An HTTP client with a focus on performance, built on top of Mint and NimblePool."
When using an adapter other than `:httpc`, remember to add it to the dependency list in `mix.exs`:
```elixir
defp deps do
  [
    {:tesla, "~> 1.9"},
    {:hackney, "~> 1.20"} # when using the hackney adapter
  ]
end
```
If you need to pass specific adapter options, you can do it in one of four ways:
Supplying them as a keyword list in a tuple via config:

```elixir
config :tesla, adapter: {Tesla.Adapter.Hackney, [recv_timeout: 30_000]}
```
Using the `adapter` macro:
```elixir
defmodule GitHub do
  use Tesla

  adapter Tesla.Adapter.Hackney, recv_timeout: 30_000, ssl_options: [certfile: "certs/client.crt"]
end
```
Using `Tesla.client/2`:
```elixir
def new(...) do
  middleware = [...]
  adapter = {Tesla.Adapter.Hackney, [recv_timeout: 30_000]}
  Tesla.client(middleware, adapter)
end
```
Passing them directly to request functions such as `MyClient.get/3` or `Tesla.get/3`:
```elixir
MyClient.get("/", opts: [adapter: [recv_timeout: 30_000]])
Tesla.get(client, "/", opts: [adapter: [recv_timeout: 30_000]])
```
If the adapter supports it, you can pass a `Stream` as the request body, e.g.:
```elixir
defmodule ElasticSearch do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "http://localhost:9200"
  plug Tesla.Middleware.JSON

  def index(records_stream) do
    # wrap each record in a bulk-API action map (illustrative shape)
    stream = Stream.map(records_stream, fn record -> %{index: record} end)
    post("/_bulk", stream)
  end
end
```
Each item of the stream will be encoded as JSON and sent as a new line (conforming to the JSON streaming format).
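For example, the records could come from a newline-delimited JSON file (the file name and its format are assumptions for illustration):

```elixir
# each line of records.ndjson is assumed to hold one JSON document
records =
  "records.ndjson"
  |> File.stream!()
  |> Stream.map(&Jason.decode!/1)

{:ok, response} = ElasticSearch.index(records)
```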
If the adapter supports it, you can pass a `response: :stream` option to return the response body as a `Stream`:
```elixir
defmodule OpenAI do
  def new(token) do
    middleware = [
      {Tesla.Middleware.BaseUrl, "https://api.openai.com/v1"},
      {Tesla.Middleware.BearerAuth, token: token},
      {Tesla.Middleware.JSON, decode_content_types: ["text/event-stream"]},
      {Tesla.Middleware.SSE, only: :data}
    ]

    Tesla.client(middleware, {Tesla.Adapter.Finch, name: MyFinch})
  end

  def completion(client, prompt) do
    data = %{
      model: "gpt-3.5-turbo",
      messages: [%{role: "user", content: prompt}],
      stream: true
    }

    Tesla.post(client, "/chat/completions", data, opts: [adapter: [response: :stream]])
  end
end
```
```elixir
client = OpenAI.new("<token>")
{:ok, env} = OpenAI.completion(client, "What is the meaning of life?")

# the body stream is lazy, so run it to consume the chunks
env.body
|> Stream.each(fn chunk -> IO.inspect(chunk) end)
|> Stream.run()
```
You can pass a `Tesla.Multipart` struct as the body:
```elixir
alias Tesla.Multipart

mp =
  Multipart.new()
  |> Multipart.add_content_type_param("charset=utf-8")
  |> Multipart.add_field("field1", "foo")
  |> Multipart.add_field("field2", "bar",
    headers: [{"content-id", "1"}, {"content-type", "text/plain"}]
  )
  |> Multipart.add_file("test/tesla/multipart_test_file.sh")
  |> Multipart.add_file("test/tesla/multipart_test_file.sh", name: "foobar")
  |> Multipart.add_file_content("sample file content", "sample.txt")

{:ok, response} = MyApiClient.post("https://httpbin.org/post", mp)
```
You can set the adapter to `Tesla.Mock` in tests:
```elixir
# config/test.exs

# Use mock adapter for all clients
config :tesla, adapter: Tesla.Mock

# or only for one
config :tesla, MyApi, adapter: Tesla.Mock
```
Then, mock requests before using your client:
```elixir
defmodule MyAppTest do
  use ExUnit.Case

  import Tesla.Mock

  setup do
    mock(fn
      %{method: :get, url: "https://example.com/hello"} ->
        %Tesla.Env{status: 200, body: "hello"}

      %{method: :post, url: "https://example.com/world"} ->
        json(%{"my" => "data"})
    end)

    :ok
  end

  test "list things" do
    assert {:ok, %Tesla.Env{} = env} = MyApi.get("https://example.com/hello")
    assert env.status == 200
    assert env.body == "hello"
  end
end
```
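The `json/1` helper used above builds a JSON response with status 200. Assuming `MyApi` includes `Tesla.Middleware.JSON`, the mocked POST endpoint could be tested like this:

```elixir
test "create things" do
  assert {:ok, env} = MyApi.post("https://example.com/world", %{"some" => "data"})
  assert env.status == 200
  assert env.body == %{"my" => "data"}
end
```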
A Tesla middleware is a module implementing the `c:Tesla.Middleware.call/3` callback that, at some point, calls `Tesla.run/2` with `env` and `next` to process the rest of the stack.
```elixir
defmodule MyMiddleware do
  @behaviour Tesla.Middleware

  def call(env, next, options) do
    env
    |> do_something_with_request()
    |> Tesla.run(next)
    |> do_something_with_response()
  end
end
```
The arguments are:

- `env` - `Tesla.Env` instance
- `next` - middleware continuation stack; to be executed with `Tesla.run/2` with `env` and `next`
- `options` - arguments passed during middleware configuration (`plug MyMiddleware, options`)
There is no distinction between request and response middleware; it's all about executing the `Tesla.run/2` function at the correct time.
For example, a request logger middleware could be implemented like this:
```elixir
defmodule Tesla.Middleware.RequestLogger do
  @behaviour Tesla.Middleware

  def call(env, next, _) do
    env
    |> IO.inspect()
    |> Tesla.run(next)
  end
end
```
and response logger middleware like this:
```elixir
defmodule Tesla.Middleware.ResponseLogger do
  @behaviour Tesla.Middleware

  def call(env, next, _) do
    env
    |> Tesla.run(next)
    |> IO.inspect()
  end
end
```
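A single middleware can also act on both sides of `Tesla.run/2` at once. A minimal sketch of a hypothetical timing middleware:

```elixir
defmodule MyApp.Middleware.Timer do
  @behaviour Tesla.Middleware

  def call(env, next, _options) do
    start = System.monotonic_time(:millisecond)

    # run the rest of the stack, then measure how long it took
    result = Tesla.run(env, next)

    elapsed = System.monotonic_time(:millisecond) - start
    IO.puts("#{env.method} #{env.url} took #{elapsed}ms")
    result
  end
end
```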
See the built-in middleware for more examples.
Middleware should have documentation following this template:
````elixir
defmodule Tesla.Middleware.SomeMiddleware do
  @moduledoc """
  Short description of what it does.

  Longer description, including e.g. additional dependencies.

  ### Examples

  ```elixir
  defmodule MyClient do
    use Tesla

    plug Tesla.Middleware.SomeMiddleware, most: :common, options: "here"
  end
  ```

  ### Options

  - `:list` - all possible options
  - `:with` - their default values
  """

  @behaviour Tesla.Middleware
end
````
You can also use Tesla directly, without creating a client module. This, however, won't include any middleware.
```elixir
# Example get request
{:ok, response} = Tesla.get("https://httpbin.org/ip")

response.status
# => 200

response.body
# => "{\n  \"origin\": \"87.205.72.203\"\n}\n"

response.headers
# => [{"content-type", "application/json"}, ...]

{:ok, response} = Tesla.get("https://httpbin.org/get", query: [a: 1, b: "foo"])

# Example post request
{:ok, response} =
  Tesla.post("https://httpbin.org/post", "data", headers: [{"content-type", "application/json"}])
```
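All of these helpers build on `Tesla.request/2`, which can also be called directly with an explicit client (here one with no middleware):

```elixir
client = Tesla.client([])
{:ok, response} = Tesla.request(client, url: "https://httpbin.org/ip", method: :get)
```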
```elixir
# GET /path
get("/path")

# GET /path?a=hi&b[]=1&b[]=2&b[]=3
get("/path", query: [a: "hi", b: [1, 2, 3]])

# GET with dynamic client
get(client, "/path")
get(client, "/path", query: [page: 3])

# arguments are the same for GET, HEAD, OPTIONS & TRACE
head("/path")
options("/path")
trace("/path")

# POST, PUT, PATCH
post("/path", "some-body-i-used-to-know")
put("/path", "some-body-i-used-to-know", query: [a: "0"])
patch("/path", multipart)
```
```elixir
# generate only get and post functions
use Tesla, only: ~w(get post)a

# generate only the delete function
use Tesla, only: [:delete]

# generate all functions except delete and options
use Tesla, except: [:delete, :options]

# disable docs for generated functions
use Tesla, docs: false
```
```elixir
# use encode/decode JSON middleware separately
plug Tesla.Middleware.EncodeJson
plug Tesla.Middleware.DecodeJson

# use JSX
plug Tesla.Middleware.JSON, engine: JSX, engine_opts: [strict: [:comments]]

# use custom functions
plug Tesla.Middleware.JSON, decode: &JSX.decode/1, encode: &JSX.encode/1
```
```elixir
defmodule Tesla.Middleware.MyCustomMiddleware do
  @behaviour Tesla.Middleware

  def call(env, next, options) do
    env
    |> do_something_with_request()
    |> Tesla.run(next)
    |> do_something_with_response()
  end
end
```
- Fork it (https://github.com/teamon/tesla/fork)
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
This project is licensed under the MIT License. See the LICENSE file for details.
Copyright (c) 2015-2021 Tymon Tobolski
This project is sponsored by ubots - Useful bots for Slack