
gostatsd


An implementation of Etsy's statsd in Go, based on original code from @kisielk.

The project provides both a server, called "gostatsd", which works much like Etsy's version, and a library for developing customized servers.

Backends are pluggable and only need to support the backend interface.

Being written in Go, it is able to use all CPU cores, which makes it easy to scale up the server based on load.

Building the server

Gostatsd currently targets Go 1.20.6. If you are compiling from source, please ensure you are running this version.

From the gostatsd directory run make build. The binary will be built in build/bin/<arch>/gostatsd.

You will need to install the Go build dependencies by running make setup in the gostatsd directory. This must be done before the first build, and again whenever the dependencies change. A protobuf installation is expected to be found in the tools/ directory. Managing this in a platform-agnostic way is difficult, but PRs are welcome. Hopefully the generated protobuf files will be sufficient in the majority of cases.

If you are unable to build gostatsd please check your Go version, and try running make setup again before reporting a bug.

Running the server

gostatsd --help gives a complete description of available options and their defaults. You can use make run to run the server with just the stdout backend to display info on screen.

You can also run it in Docker: make run-docker uses docker-compose to run gostatsd with a graphite backend and a grafana dashboard.

While not generally tested on Windows, it should work. Maximum throughput is likely to be better on a Linux system, however.

The server listens for UDP packets by default. You can use Unix sockets instead by providing an absolute path to the socket in the metrics-addr configuration option. The socket mode used in this case is SOCK_DGRAM. Note that Unix sockets only work on Linux, and that the conn-per-reader configuration option is ignored when they are used.

Configuring the server mode

The server can currently run in two modes: standalone and forwarder. It is configured through the top level server-mode configuration setting. The default is standalone.

In standalone mode, raw metrics are processed and aggregated as normal, and the aggregated data is submitted to the configured backends (see below).

This configuration mode allows the following configuration options:

  • expiry-interval: interval before metrics are expired, see Metric expiry and persistence section. Defaults to 5m. 0 to disable, -1 for immediate.
  • expiry-interval-counter: interval before counters are expired, defaults to the value of expiry-interval.
  • expiry-interval-gauge: interval before gauges are expired, defaults to the value of expiry-interval.
  • expiry-interval-set: interval before sets are expired, defaults to the value of expiry-interval.
  • expiry-interval-timer: interval before timers are expired, defaults to the value of expiry-interval.
  • flush-aligned: whether or not the flush should be aligned. Setting this will flush at an exact time interval. With a 10 second flush-interval, if the service happens to be started at 12:47:13, then flushing will occur at 12:47:20, 12:47:30, etc, rather than 12:47:23, 12:47:33, etc. This removes query time ambiguity in a multi-server environment. Defaults to false.
  • flush-interval: duration for how long to batch metrics before flushing. Should be an order of magnitude less than the upstream flush interval. Defaults to 1s.
  • flush-offset: offset for flush interval when flush alignment is enabled. For example, with an offset of 7s and an interval of 10s, it will flush at 12:47:10+7 = 12:47:17, etc.
  • ignore-host: indicates whether or not an explicit host field will be added to all incoming metrics and events. Defaults to false
  • max-readers: the number of UDP receivers to run. Defaults to 8 or the number of logical cores, whichever is less.
  • max-parsers: the number of workers available to parse metrics. Defaults to the number of logical cores.
  • max-workers: the number of aggregators to process metrics. Defaults to the number of logical cores.
  • max-queue-size: the size of the buffers between parsers and workers. Defaults to 10000, monitored via channel.* metric, with dispatch_aggregator_batch and dispatch_aggregator_map channels.
  • max-concurrent-events: the maximum number of concurrent events to be dispatching. Defaults to 1024, monitored via channel.* metric, with backend_events_sem channel.
  • estimated-tags: provides a hint to the system as to how many tags are expected to be seen on any particular metric, so that memory can be pre-allocated and reducing churn. Defaults to 4. Note: this is only a hint, and it is safe to send more.
  • log-raw-metric: logs raw metrics received from the network. Defaults to false.
  • metrics-addr: the address to listen to metrics on. Defaults to :8125. Using a file path instead of host:port will create a Unix Domain Socket in the specified path instead of using UDP.
  • namespace: a namespace to prefix all metrics with. Defaults to ''.
  • statser-type: configures where internal metrics are sent to. May be internal which sends them to the internal processing pipeline, logging which logs them, null which drops them. Defaults to internal, or null if the NewRelic backend is enabled.
  • percent-threshold: configures the "percentiles" sent on timers. Space separated string. Defaults to 90.
  • heartbeat-enabled: emits a metric named heartbeat every flush interval, tagged by version and commit. Defaults to false.
  • receive-batch-size: the number of datagrams to attempt to read. It is more CPU efficient to read multiple, however it takes extra memory. See [Memory allocation for read buffers] section below for details. Defaults to 50.
  • conn-per-reader: attempts to create a connection for every UDP receiver. Not supported by all OS versions. It will be ignored when unix sockets are used for the connection. Defaults to false.
  • bad-lines-per-minute: the number of metrics which fail to parse to log per minute. This is used to prevent a bad client spamming malformed statsd data, while still logging some information to enable troubleshooting. Defaults to 0.
  • hostname: sets the hostname on internal metrics
  • timer-histogram-limit: specifies the maximum number of buckets on histograms. See [Timer histograms] below.
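
As a concrete illustration, a standalone server might be configured like this (all values below are illustrative, not recommendations):

server-mode='standalone'
backends='graphite'
metrics-addr=':8125'
flush-interval='1s'
expiry-interval='5m'
percent-threshold='90 99'
max-readers=4
estimated-tags=8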

In forwarder mode, raw metrics are collected from a frontend, and instead of being aggregated they are sent via http to another gostatsd server after passing through the processing pipeline (cloud provider, static tags, filtering, etc).

A forwarder server is intended to run on-host and collect metrics, forwarding them on to a central aggregation service. At present the central aggregation service can only scale vertically, but horizontal scaling through clustering is planned.

Aligned flushing is deliberately not supported in forwarder mode: all forwarder nodes transmitting at once would impact the central aggregation server, and many forwarder flushes are expected to occur per central flush anyway.

Configuring forwarder mode requires a configuration file with a section named http-transport. The version of the raw protocol spoken is not configurable per server (see HTTP.md for version guarantees). The section allows the following configuration options:

  • compress: boolean indicating if the payload should be compressed. Defaults to true
  • api-endpoint: configures the endpoint to submit raw metrics to. This setting should be just a base URL, for example https://statsd-aggregator.private, with no path. Required, no default
  • max-requests: maximum number of requests in flight. Defaults to 1000 (which is probably too high)
  • concurrent-merge: maximum number of concurrent goroutines allowed to merge metrics before forwarding. Defaults to 1 for backward-compatibility
  • max-request-elapsed-time: duration for the maximum amount of time to try submitting data before giving up. This includes retries. Defaults to 30s (which is probably too high). Setting this value to -1 will disable retries.
  • consolidator-slots: number of slots in the metric consolidator. Memory usage is a function of this. Lower values may cause blocking in the pipeline (back pressure). A UDP only receiver will never use more than the number of configured parsers (--max-parsers option). Defaults to the value of --max-parsers, but may require tuning for HTTP based servers.
  • transport: see TRANSPORT.md for how to configure the transport.
  • custom-headers: a map of strings that are added to each request sent, to allow for additional network routing / request inspection. Not required, default is empty. Example: --custom-headers='{"region" : "us-east-1", "service" : "event-producer"}'
  • dynamic-headers: similar to custom-headers, but the header values are extracted from metric tags matching the provided list of strings. Tag names are canonicalized by first replacing underscores with hyphens, then converting the first letter and each letter after a hyphen to upper case; the rest are converted to lower case. If a tag is specified in both custom-headers and dynamic-headers, the value set by custom-headers takes precedence. Not required, default is empty. Example: --dynamic-headers='["region", "service"]'. This is an experimental feature and it may be removed or changed in future versions.

The following settings from the previous section are also supported:

  • expiry-*
  • ignore-host
  • max-readers
  • max-parsers
  • estimated-tags
  • log-raw-metric
  • metrics-addr
  • namespace
  • statser-type
  • heartbeat-enabled
  • receive-batch-size
  • conn-per-reader
  • bad-lines-per-minute
  • hostname
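
Putting it together, a minimal forwarder configuration file might look like this (the endpoint URL and values are illustrative):

server-mode='forwarder'

[http-transport]
compress=true
api-endpoint='https://statsd-aggregator.private'
max-requests=100
max-request-elapsed-time='15s'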

Metric expiry and persistence

After a metric has been sent to the server, the server will continue to send the metric to the configured backend until it expires, even if no additional metrics are sent from the client. The value sent depends on the metric type:

  • counter: sends 0 for both rate and count
  • gauge: sends the last received value.
  • set: sends 0
  • timer: sends non-percentile values of 0. Percentile values are not sent at all (see issue #135)

Setting an expiry interval of 0 will persist metrics forever. If metrics are not carefully controlled in such an environment, the server may run out of memory or overload the backend receiving the metrics. Setting a negative expiry interval will result in metrics not being persisted at all.

Each metric type has its own interval, which is configured using the following precedence (from highest to lowest): expiry-interval-<type> > expiry-interval > default (5 minutes).
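
For example, to expire idle gauges after one minute while leaving the other types at the shared interval:

expiry-interval='5m'
expiry-interval-gauge='1m'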

Configuring HTTP servers

The service supports multiple HTTP servers, with different configurations for different requirements. All http servers are named in the top level http-servers setting. It should be a space separated list of names. Each server is then configured by creating a section in the configuration file named http.<servername>. An http server section has the following configuration options:

  • address: the address to bind to
  • enable-prof: boolean indicating if profiler endpoints should be enabled. Default false
  • enable-expvar: boolean indicating if expvar endpoints should be enabled. Default false
  • enable-ingestion: boolean indicating if ingestion should be enabled. Default false
  • enable-healthcheck: boolean indicating if healthchecks should be enabled. Default true

For example, to configure a server with a localhost only diagnostics endpoint, and a regular ingestion endpoint that can sit behind an ELB, the following configuration could be used:

backends='stdout'
http-servers='receiver profiler'

[http.receiver]
address='0.0.0.0:8080'
enable-ingestion=true

[http.profiler]
address='127.0.0.1:6060'
enable-expvar=true
enable-prof=true

There is no capability to run an HTTPS server at this point in time, and there is no authentication (which is why you might want different addresses). You could also put a reverse proxy in front of the service. Documentation for the endpoints can be found in HTTP.md.

Configuring backends

Refer to backends for configuration options for the backends.

Cloud providers

Cloud providers are a way to automatically enrich metrics with metadata from a cloud vendor.

Refer to cloud providers for configuration options for the cloud providers.

Configuring timer sub-metrics

By default, timer metrics will result in aggregated metrics of the form (exact name varies by backend):

<base>.Count
<base>.CountPerSecond
<base>.Mean
<base>.Median
<base>.Lower
<base>.Upper
<base>.StdDev
<base>.Sum
<base>.SumSquares

In addition, the following aggregated metrics will be emitted for each configured percentile:

<base>.Count_XX
<base>.Mean_XX
<base>.Sum_XX
<base>.SumSquares_XX
<base>.Upper_XX - for positive percentiles only
<base>.Lower_-XX - for negative percentiles only

These can be controlled through the disabled-sub-metrics configuration section:

[disabled-sub-metrics]
# Regular metrics
count=false
count-per-second=false
mean=false
median=false
lower=false
upper=false
stddev=false
sum=false
sum-squares=false

# Percentile metrics
count-pct=false
mean-pct=false
sum-pct=false
sum-squares-pct=false
lower-pct=false
upper-pct=false

By default (for compatibility), they are all false and the metrics will be emitted.
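
For example, to stop emitting only the sum-of-squares series (setting a sub-metric to true disables it):

[disabled-sub-metrics]
sum-squares=true
sum-squares-pct=true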

Timer histograms (experimental feature)

Timer histograms, inspired by the Prometheus implementation, can be enabled on a per-time-series basis using the gsd_histogram meta tag, with a value containing the histogram bucketing definition (joined with _), e.g. gsd_histogram:-10_0_2.5_5_10_25_50.

It will:

  • output additional counter time series with name <base>.histogram and le tags specifying histogram buckets.
  • disable default sub-aggregations for timers e.g. <base>.Count, <base>.Mean, <base>.Upper, <base>.Upper_XX, etc.

For a timer with the gsd_histogram:-10_0_2.5_5_10_25_50 meta tag, the following time series will be generated:

  • <base>.histogram with tag le:-10
  • <base>.histogram with tag le:0
  • <base>.histogram with tag le:2.5
  • <base>.histogram with tag le:5
  • <base>.histogram with tag le:10
  • <base>.histogram with tag le:25
  • <base>.histogram with tag le:50
  • <base>.histogram with tag le:+Inf

Each time series will contain the total number of timer data points with a value less than or equal to the le value; e.g. the counter <base>.histogram with the tag le:5 will contain the number of all observations with a value no bigger than 5. The counter <base>.histogram with tag le:+Inf is equivalent to <base>.count and contains the total number of observations.
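
The cumulative le semantics can be sketched as follows (this is an illustration of the bucketing semantics only, not gostatsd's actual implementation; the function name is hypothetical):

```go
package main

import "fmt"

// cumulativeBuckets counts, for each boundary b, the observations with
// value <= b, and adds an le:+Inf bucket containing every observation.
func cumulativeBuckets(observations, boundaries []float64) map[string]int {
	counts := make(map[string]int)
	for _, b := range boundaries {
		key := fmt.Sprintf("le:%g", b)
		counts[key] = 0 // emit zero buckets too
		for _, v := range observations {
			if v <= b {
				counts[key]++
			}
		}
	}
	counts["le:+Inf"] = len(observations)
	return counts
}

func main() {
	// With observations {1, 3, 7, 30}: le:2.5 -> 1, le:5 -> 2, le:10 -> 3,
	// le:50 -> 4, and le:+Inf -> 4 (the total count).
	fmt.Println(cumulativeBuckets(
		[]float64{1, 3, 7, 30},
		[]float64{-10, 0, 2.5, 5, 10, 25, 50},
	))
}
```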

All original timer tags are preserved and added to all the time series.

To limit cardinality, the timer-histogram-limit option can be specified to limit the number of buckets that will be created (the default is math.MaxUint32). A value of 0 does not disable the feature; instead, 0 buckets are emitted, which effectively drops metrics with gsd_histogram tags.

Incorrect meta tag values are handled in a best-effort manner, i.e.

  • gsd_histogram:10__20_50 and gsd_histogram:10_incorrect_20_50 will generate le:10, le:20, le:50 and le:+Inf buckets
  • gsd_histogram:incorrect will result in only the le:+Inf bucket

This is an experimental feature and it may be removed or changed in future versions.

Load testing

There is a tool under cmd/loader with support for a number of options which can be used to generate synthetic statsd load. There is also another load generation tool under cmd/tester which is deprecated and will be removed in a future release.

Help for the loader tool can be found through --help.

Sending metrics

The server listens for UDP packets on the address given by the --metrics-addr flag, aggregates them, then sends them to the backend servers given by the --backends flag (space separated list of backend names).

Currently supported backends are:

  • cloudwatch
  • datadog
  • graphite
  • influxdb
  • newrelic
  • statsdaemon
  • stdout

The format of each metric is:

<bucket name>:<value>|<type>\n
  • <bucket name> is a string like abc.def.g, just like a graphite bucket name
  • <value> is a string representation of a floating point number
  • <type> is one of c, g, or ms for "counter", "gauge", and "timer" respectively.

A single packet can contain multiple metrics, each ending with a newline.

Optionally, gostatsd supports sample rates (for simple counters, and for timer counters) and tags:

  • <bucket name>:<value>|c|@<sample rate>\n where sample rate is a float between 0 and 1
  • <bucket name>:<value>|c|@<sample rate>|#<tags>\n where tags is a comma separated list of tags
  • <bucket name>:<value>|<type>|#<tags>\n where tags is a comma separated list of tags

The tag format is either simple (a bare value) or key:value.

A simple way to test your installation or send metrics from a script is to use echo and the netcat utility nc:

echo 'abc.def.g:10|c' | nc -w1 -u localhost 8125
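
The same metrics can be sent from Go. Below is a minimal sketch of building lines in the wire format described above and sending them over UDP (the address localhost:8125 assumes the default metrics-addr; the helper function is ours, not part of gostatsd):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// formatMetric renders one statsd line:
// <bucket>:<value>|<type>[|@<sample rate>][|#<tags>]
func formatMetric(bucket, value, typ, sampleRate string, tags []string) string {
	line := fmt.Sprintf("%s:%s|%s", bucket, value, typ)
	if sampleRate != "" {
		line += "|@" + sampleRate
	}
	if len(tags) > 0 {
		line += "|#" + strings.Join(tags, ",")
	}
	return line + "\n"
}

func main() {
	// A single packet can carry multiple metrics, each ending with a newline.
	payload := formatMetric("abc.def.g", "10", "c", "", nil) +
		formatMetric("request.time", "320", "ms", "0.1", []string{"env:dev", "service:api"})

	// Send to the gostatsd UDP listener (default metrics-addr is :8125).
	conn, err := net.Dial("udp", "localhost:8125")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if _, err := conn.Write([]byte(payload)); err != nil {
		panic(err)
	}
}
```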

Monitoring

Many metrics for the internal processes are emitted. See METRICS.md for details. Go expvar is also exposed if the --profile flag is used.

Memory allocation for read buffers

By default gostatsd will batch read multiple packets to optimise read performance. The amount of memory allocated for these read buffers is determined by the config options:

max-readers * receive-batch-size * 64KB (max packet size)

The metric avg_packets_in_batch can be used to track the average number of datagrams received per batch, and the --receive-batch-size flag used to tune it. There may be some benefit to tuning the --max-readers flag as well.
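
Plugging the defaults into this formula (max-readers = 8 on a machine with 8 or more logical cores, receive-batch-size = 50) gives about 25 MB; a quick sketch:

```go
package main

import "fmt"

// readBufferBytes computes the worst-case memory reserved for read buffers:
// max-readers * receive-batch-size * 64KB (maximum UDP datagram size).
func readBufferBytes(maxReaders, receiveBatchSize int) int {
	const maxPacketSize = 64 * 1024
	return maxReaders * receiveBatchSize * maxPacketSize
}

func main() {
	total := readBufferBytes(8, 50) // default values
	fmt.Printf("%d bytes (%.1f MB)\n", total, float64(total)/(1024*1024))
	// prints: 26214400 bytes (25.0 MB)
}
```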

Using the library

In your source code:

import "github.com/atlassian/gostatsd/pkg/statsd"

Note that this project uses Go modules for dependency management.

Documentation can be found via go doc github.com/atlassian/gostatsd/pkg/statsd or at https://godoc.org/github.com/atlassian/gostatsd/pkg/statsd

Versioning

Gostatsd uses semver for both its API and configuration settings; however, it does not use it for packages.

This is due to gostatsd being an application first and a library second. Breaking API changes occur regularly, and the overhead of managing this is too burdensome.

Contributors

Pull requests, issues and comments welcome. For pull requests:

  • Add tests for new features and bug fixes
  • Follow the existing style
  • Separate unrelated changes into multiple pull requests

See the existing issues for things to start contributing.

For bigger changes, make sure you start a discussion first by creating an issue and explaining the intended change.

Atlassian requires contributors to sign a Contributor License Agreement, known as a CLA. This serves as a record stating that the contributor is entitled to contribute the code/documentation/translation to the project and is willing to have it used in distributions and derivative works (or is willing to transfer ownership).

Prior to accepting your contributions we ask that you please follow the appropriate link below to digitally sign the CLA. The Corporate CLA is for those who are contributing as a member of an organization and the individual CLA is for those contributing as an individual.

License

Copyright (c) 2012 Kamil Kisiel. Copyright (c) 2016-2020 Atlassian Pty Ltd and others.

Licensed under the MIT license. See LICENSE file.
