Prometheus FastAPI Instrumentator


A configurable and modular Prometheus Instrumentator for your FastAPI. Install prometheus-fastapi-instrumentator from PyPI. Here is the fast track to get started with a pre-configured instrumentator. Import the Instrumentator class:

from prometheus_fastapi_instrumentator import Instrumentator

Instrument your app with default metrics and expose the metrics:

Instrumentator().instrument(app).expose(app)

Depending on your code you might have to use the following instead:

instrumentator = Instrumentator().instrument(app)

@app.on_event("startup")
async def _startup():
    instrumentator.expose(app)

With this, your FastAPI is instrumented and metrics are ready to be scraped. The defaults give you:

  • Counter http_requests_total with handler, status and method. Total number of requests.
  • Summary http_request_size_bytes with handler. Sum of the content lengths of all incoming requests.
  • Summary http_response_size_bytes with handler. Sum of the content lengths of all outgoing responses.
  • Histogram http_request_duration_seconds with handler. Only a few buckets to keep cardinality low.
  • Histogram http_request_duration_highr_seconds without any labels. Large number of buckets (>20).

In addition, the following behavior is active:

  • Status codes are grouped into 2xx, 3xx and so on.
  • Requests without a matching template are grouped into the handler none.
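
The status code grouping described above can be pictured as a simple mapping from a numeric code to its class. The helper below is illustrative, not the library's internal implementation:

```python
def group_status_code(status: int) -> str:
    """Map a numeric HTTP status code to its class, e.g. 201 -> "2xx"."""
    return f"{status // 100}xx"
```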

If one of these presets does not suit your needs you can do one of multiple things:

  • Pick one of the already existing closures from metrics and pass it to the instrumentator instance. See here how to do that.
  • Create your own instrumentation function that you can pass to an instrumentator instance. See here to learn more.
  • Don't use this package at all and just use the source code as inspiration on how to instrument your FastAPI.

Important: This package is not made for generic Prometheus instrumentation in Python. Use the Prometheus client library for that. This package uses it as well.

Features

Beyond the fast track, this instrumentator is highly configurable and very easy to customize and adapt to your specific use case. Here are some of the options you can opt in to:

  • Regex patterns to ignore certain routes.
  • Completely ignore untemplated routes.
  • Control instrumentation and exposition with an env var.
  • Rounding of latencies to a certain decimal number.
  • Renaming of labels and the metric.
  • Metrics endpoint can compress data with gzip.
  • Opt-in metric to monitor the number of requests in progress.

It also features a modular approach to metrics that should instrument all FastAPI endpoints. You can either choose from a set of existing metrics or create your own. Every metric function can be configured by itself as well. You can see ready-to-use metrics here.

Advanced Usage

This chapter contains an example of the advanced usage of the Prometheus FastAPI Instrumentator to showcase most of its features. For more concrete info, check out the automatically generated documentation.

Creating the Instrumentator

We start by creating an instance of the Instrumentator. Notice the additional metrics import. This will come in handy later.

from prometheus_fastapi_instrumentator import Instrumentator, metrics

instrumentator = Instrumentator(
    should_group_status_codes=False,
    should_ignore_untemplated=True,
    should_respect_env_var=True,
    should_instrument_requests_inprogress=True,
    excluded_handlers=[".*admin.*", "/metrics"],
    env_var_name="ENABLE_METRICS",
    inprogress_name="inprogress",
    inprogress_labels=True,
)

Unlike in the fast track example, the instrumentation and exposition will now only take place if the environment variable ENABLE_METRICS is true at run-time. This can be helpful in larger deployments with multiple services depending on the same base FastAPI.
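
The gating behavior boils down to a run-time environment variable check, roughly like the sketch below. The accepted truthy values here are an assumption; consult the library source for the exact comparison it performs:

```python
import os

def metrics_enabled(env_var_name: str = "ENABLE_METRICS") -> bool:
    # Sketch of the run-time check: instrument and expose only if the
    # variable is set to a truthy value. The exact accepted values may
    # differ in the library itself.
    return os.environ.get(env_var_name, "").lower() in ("true", "1")
```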

Adding metrics

Let's say we also want to instrument the size of requests and responses. For this we use the add() method. This method does nothing more than take a function and add it to a list. At run-time, every time FastAPI handles a request, all functions in this list are called with a single argument that stores useful information like the request and response objects. If add() is never used, the default metric gets added in the background. This is what happens in the fast track example.
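
Conceptually, that dispatch pattern reads like the following toy model. Names are illustrative, not the library's internals:

```python
class MiniInstrumentator:
    """Toy model of the add()/dispatch pattern described above."""

    def __init__(self):
        self.instrumentations = []

    def add(self, func):
        # add() just records the function; returning self allows chaining.
        self.instrumentations.append(func)
        return self

    def handle(self, info):
        # Called once per request: every registered function gets `info`.
        for func in self.instrumentations:
            func(info)
```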

All instrumentation functions are stored as closures in the metrics module. For more concrete info, check out the automatically generated documentation.

Closures come in handy here because they allow us to configure the functions within.

instrumentator.add(metrics.latency(buckets=(1, 2, 3,)))

This simply adds the metric you also get in the fast track example with a modified buckets argument. But we would also like to record the size of all requests and responses.

instrumentator.add(
    metrics.request_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="a",
        metric_subsystem="b",
    )
).add(
    metrics.response_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="namespace",
        metric_subsystem="subsystem",
    )
)

You can add as many metrics as you like to the instrumentator.

Creating new metrics

As already mentioned, it is possible to create custom functions to pass on to add(). This is also how the default metrics are implemented. The documentation and code here is helpful to get an overview.

The basic idea is that the instrumentator creates an info object that contains everything necessary for instrumentation based on the configuration of the instrumentator. This includes the raw request and response objects but also the modified handler, grouped status code and duration. Next, all registered instrumentation functions are called. They get info as their single argument.
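
As a rough picture, the info object carries fields like these. This is an illustrative stand-in; consult metrics.Info in the generated documentation for the authoritative field list:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class InfoSketch:
    # Field names follow the description above, not necessarily the
    # library's exact definition.
    request: Any                # raw request object
    response: Optional[Any]     # raw response object, may be None
    modified_handler: str       # templated route, e.g. "/items/{id}"
    modified_status: str        # possibly grouped, e.g. "2xx"
    modified_duration: float    # request duration in seconds
```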

Let's say we want to count the number of times a certain language has been requested.

from typing import Callable
from prometheus_fastapi_instrumentator.metrics import Info
from prometheus_client import Counter

def http_requested_languages_total() -> Callable[[Info], None]:
    METRIC = Counter(
        "http_requested_languages_total",
        "Number of times a certain language has been requested.",
        labelnames=("langs",)
    )

    def instrumentation(info: Info) -> None:
        langs = set()
        lang_str = info.request.headers["Accept-Language"]
        for element in lang_str.split(","):
            element = element.split(";")[0].strip().lower()
            langs.add(element)
        for language in langs:
            METRIC.labels(language).inc()

    return instrumentation

The function http_requested_languages_total is used for persistent elements that are stored between all instrumentation executions (for example the metric instance itself). Next comes the closure. This function must adhere to the shown interface. It will always get an Info object that contains the request, the response and a few other pieces of modified information, for example the (grouped) status code or the handler. Finally, the closure is returned.

Important: The response object inside info can either be the response object or None. In addition, errors thrown in the handler are not caught by the instrumentator. I recommend checking the documentation and/or the source code before creating your own metrics.
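
The header parsing inside the closure can be exercised on its own; here it is extracted as a standalone helper for illustration (same logic as above):

```python
def parse_languages(header: str) -> set:
    # Split the Accept-Language value on ",", drop quality values
    # after ";", and normalize case, mirroring the closure above.
    langs = set()
    for element in header.split(","):
        langs.add(element.split(";")[0].strip().lower())
    return langs
```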

To use it, we hand over the closure to the instrumentator object.

instrumentator.add(http_requested_languages_total())

Perform instrumentation

Up to this point, the FastAPI has not been touched at all. Everything has been stored in the instrumentator only. To actually register the instrumentation with FastAPI, the instrument() method has to be called.

instrumentator.instrument(app)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.

Specify namespace and subsystem

You can specify the namespace and subsystem of the metrics by passing them in the instrument method.

from prometheus_fastapi_instrumentator import Instrumentator

@app.on_event("startup")
async def startup():
    Instrumentator().instrument(
        app, metric_namespace='myproject', metric_subsystem='myservice'
    ).expose(app)

Then your metrics will contain the namespace and subsystem in the metric name.

# TYPE myproject_myservice_http_request_duration_highr_seconds histogram
myproject_myservice_http_request_duration_highr_seconds_bucket{le="0.01"} 0.0
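
The full metric name is built by joining the non-empty parts with underscores, as the Prometheus Python client does; a sketch:

```python
def full_metric_name(name: str, namespace: str = "", subsystem: str = "") -> str:
    # Join non-empty parts with "_", mirroring how the Prometheus
    # client library composes namespace, subsystem and metric name.
    return "_".join(part for part in (namespace, subsystem, name) if part)
```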

Exposing endpoint

To expose an endpoint for the metrics, either follow the Prometheus Python Client and add the endpoint manually to the FastAPI, or serve it on a separate server. You can also use the included expose method. It will add an endpoint to the given FastAPI. With should_gzip you can instruct the endpoint to compress the data as long as the client accepts gzip encoding. Prometheus, for example, does by default. Beware that network bandwidth is often cheaper than CPU cycles.

instrumentator.expose(app, include_in_schema=False, should_gzip=True)

Notice that this will do nothing if should_respect_env_var has been set during construction of the instrumentator object and the respective env var is not found.
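
The gzip behavior comes down to checking the client's Accept-Encoding header before compressing; a minimal sketch, not the library's actual code:

```python
import gzip

def render_metrics(payload: bytes, accept_encoding: str):
    # Compress only when the client advertises gzip support, and set
    # the Content-Encoding header accordingly.
    if "gzip" in accept_encoding.lower():
        return gzip.compress(payload), {"Content-Encoding": "gzip"}
    return payload, {}
```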

Contributing

Please refer to CONTRIBUTING.md.

Consult DEVELOPMENT.md for guidance regarding development.

Read RELEASE.md for details about the release process.

Licensing

The default license for this project is the ISC License, a permissive license functionally equivalent to the BSD 2-Clause and MIT licenses, with some language removed that is no longer necessary. See LICENSE for the license text.

The BSD 3-Clause License is used as the license for the routing module. This is due to it containing code from elastic/apm-agent-python. BSD 3-Clause is a permissive license similar to the BSD 2-Clause License, but with a 3rd clause that prohibits others from using the name of the copyright holder or its contributors to promote derived products without written consent. The license text is included in the module itself.
