Easily add metrics to your code that actually help you spot and debug issues in production. Built on Prometheus and OpenTelemetry.


Metrics are a powerful and cost-efficient tool for understanding the health and performance of your code in production. But it's hard to decide what metrics to track and even harder to write queries to understand the data.

Autometrics provides a macro that makes it trivial to instrument any function with the most useful metrics: request rate, error rate, and latency. It standardizes these metrics and then generates powerful Prometheus queries based on your function details to help you quickly identify and debug issues in production.
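For intuition, the generated error-ratio query boils down to the increase in the error counter divided by the increase in the call counter over a time window. The arithmetic can be sketched in plain Rust (illustrative only; the real queries are PromQL executed by Prometheus):

```rust
/// Illustrative only: the arithmetic behind a generated "error ratio"
/// query, i.e. increase(errors) / increase(calls) over a time window,
/// given counter samples taken at the window's start and end.
fn error_ratio(calls: (u64, u64), errors: (u64, u64)) -> f64 {
    let calls_increase = (calls.1 - calls.0) as f64;
    if calls_increase == 0.0 {
        return 0.0; // no traffic in the window, so no error ratio
    }
    let errors_increase = (errors.1 - errors.0) as f64;
    errors_increase / calls_increase
}

fn main() {
    // 100 calls and 5 errors during the window -> 5% error ratio.
    let ratio = error_ratio((1_000, 1_100), (40, 45));
    assert!((ratio - 0.05).abs() < 1e-12);
    println!("error ratio over window: {ratio}");
}
```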

Benefits

  • #[autometrics] macro adds useful metrics to any function or impl block, without you thinking about what metrics to collect
  • 💡 Generates powerful Prometheus queries to help quickly identify and debug issues in production
  • 🔗 Injects links to live Prometheus charts directly into each function's doc comments
  • 📊 Grafana dashboards work without configuration to visualize the performance of functions & SLOs
  • 🔍 Correlates your code's version with metrics to help identify commits that introduced errors or latency
  • 📏 Standardizes metrics across services and teams to improve debugging
  • ⚖️ Function-level metrics provide useful granularity without exploding cardinality
  • ⚡ Minimal runtime overhead
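Conceptually, the macro wraps each function so that every call bumps a call counter, failed calls bump an error counter, and the elapsed time is recorded. A hand-rolled, std-only sketch of that idea (not the actual macro expansion, which records into a labeled Prometheus registry):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

// Stand-ins for the real metric types; autometrics records these
// into a metrics registry with function/module labels.
static CALLS: AtomicU64 = AtomicU64::new(0);
static ERRORS: AtomicU64 = AtomicU64::new(0);

/// Roughly what an instrumented function does around its body.
fn instrumented<T, E>(body: impl FnOnce() -> Result<T, E>) -> Result<T, E> {
    let start = Instant::now();
    let result = body();
    CALLS.fetch_add(1, Ordering::Relaxed);
    if result.is_err() {
        ERRORS.fetch_add(1, Ordering::Relaxed);
    }
    let _latency = start.elapsed(); // a real impl feeds this to a histogram
    result
}

fn main() {
    let _ok: Result<u32, &str> = instrumented(|| Ok(7));
    let _err: Result<u32, &str> = instrumented(|| Err("boom"));
    assert_eq!(CALLS.load(Ordering::Relaxed), 2);
    assert_eq!(ERRORS.load(Ordering::Relaxed), 1);
    println!("calls=2 errors=1 (latency would go to a histogram)");
}
```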

Advanced Features

See autometrics.dev for more details on the ideas behind autometrics.

Example + Demo

use autometrics::autometrics;

#[autometrics]
pub async fn create_user() {
  // Now this function produces metrics! 📈
}

Here is a demo of jumping from function docs to live Prometheus charts (video: Autometrics.Demo.mp4).

Quickstart

  1. Add autometrics to your project:

    cargo add autometrics --features=prometheus-exporter
  2. Instrument your functions with the #[autometrics] macro

    use autometrics::autometrics;
    
    // Just add the autometrics annotation to your functions
    #[autometrics]
    pub async fn my_function() {
      // Now this function produces metrics!
    }
    
    struct MyStruct;
    
    // You can also instrument whole impl blocks
    #[autometrics]
    impl MyStruct {
      pub fn my_method() {
        // This method produces metrics too!
      }
    }
    Tip: Adding autometrics to all functions using the tracing::instrument macro

    You can use a search and replace to add autometrics to all functions instrumented with tracing::instrument.

    Replace:

    #[instrument]

    With:

    #[instrument]
    #[autometrics]

    Rust Analyzer will then tell you which files need use autometrics::autometrics added at the top.

    Tip: Adding autometrics to all pub functions (not necessarily recommended 😅)

    You can use a search and replace to add autometrics to all public functions. Yes, this is a bit nuts.

    Use a regular expression search to replace:

    (pub (?:async )?fn.*)
    

    With:

    #[autometrics]
    $1
    

    Rust Analyzer will then tell you which files need use autometrics::autometrics added at the top.

  3. Export the metrics for Prometheus

    For projects not currently using Prometheus metrics

    Autometrics includes optional functions to help collect and prepare metrics to be collected by Prometheus.

    In your main function, initialize the prometheus_exporter:

    use autometrics::prometheus_exporter;
    
    pub fn main() {
      prometheus_exporter::init();
      // ...
    }

    And create a route on your API (probably mounted under /metrics) that returns the following:

    use autometrics::prometheus_exporter::{self, PrometheusResponse};
    
    /// Export metrics for Prometheus to scrape
    pub fn get_metrics() -> PrometheusResponse {
      prometheus_exporter::encode_http_response()
    }
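If you want to see what a scrape looks like without any framework, here is a self-contained, std-only sketch of a /metrics endpoint. The encode_metrics function is a placeholder for the string the real exporter produces; the names and sample output here are illustrative, not autometrics' actual API:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Placeholder for the exporter's encoding step: metrics rendered
// in the Prometheus text exposition format.
fn encode_metrics() -> String {
    "# TYPE function_calls_total counter\nfunction_calls_total 1\n".into()
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // In-process "Prometheus" client so the example terminates on its own.
    let client = thread::spawn(move || {
        let mut s = TcpStream::connect(addr).unwrap();
        s.write_all(b"GET /metrics HTTP/1.1\r\n\r\n").unwrap();
        let mut out = String::new();
        s.read_to_string(&mut out).unwrap();
        out
    });

    // Serve exactly one scrape.
    let (mut stream, _) = listener.accept()?;
    let mut buf = [0u8; 512];
    let _ = stream.read(&mut buf)?;
    let body = encode_metrics();
    let resp = format!(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain; version=0.0.4\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    );
    stream.write_all(resp.as_bytes())?;
    drop(stream); // close the connection so the client sees EOF

    let scraped = client.join().unwrap();
    assert!(scraped.contains("function_calls_total"));
    println!("scrape ok");
    Ok(())
}
```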
    For projects already using custom Prometheus metrics

    Configure autometrics to use the same underlying metrics library you use with the feature flag corresponding to the crate and version you are using.

    [dependencies]
    autometrics = { version = "*", features = ["prometheus-0_13"], default-features = false }

    The autometrics metrics will be produced alongside yours.

    Note

    You must use the exact same version of the metrics library that autometrics uses. Otherwise, the autometrics metrics will not appear in your exported metrics: Cargo will include both versions of the crate, and the global statics used for the metrics registry will be different.

    You do not need to use the Prometheus exporter functions this library provides (you can leave out the prometheus-exporter feature flag) and you do not need a separate endpoint for autometrics' metrics.

  4. Run Prometheus locally with the Autometrics CLI or configure it manually to scrape your metrics endpoint

  5. (Optional) If you have Grafana, import the Autometrics dashboards for an overview and detailed view of the function metrics

API Docs

Examples


To see autometrics in action:

  1. Install Prometheus locally, or download the Autometrics CLI, which will install and configure Prometheus for you.

  2. Run the complete example:

    cargo run -p example-full-api
  3. Hover over the function names to see the generated query links and view the live Prometheus charts

Benchmarks

With each of the following metrics libraries, tracking metrics with the autometrics macro adds approximately the following per-call overhead:

  • prometheus-0_13: 140-150 nanoseconds
  • prometheus-client-0_21: 150-250 nanoseconds
  • metrics-0_21: 550-650 nanoseconds
  • opentelemetry-0_20: 1700-2100 nanoseconds

These figures were measured on a 2021 MacBook Pro with the M1 Max chip and 64 GB of RAM.

To run the benchmarks yourself, run the following command, replacing BACKEND with the metrics library of your choice:

cargo bench --features prometheus-exporter,BACKEND

Contributing

Issues, feature suggestions, and pull requests are very welcome!

