
Datadog Agent Dockerfile for Trusted Builds.

Datadog Agent 5.x Dockerfile

This repository is meant to build the base image for a Datadog Agent 5.x container. You will have to use the resulting image to configure and run the Agent. If you are looking for a Datadog Agent 6.x Dockerfile, it is available in the datadog-agent repo.

Quick Start

The default image is ready-to-go. You just need to set your API_KEY in the environment.

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  -e SD_BACKEND=docker \
  -e NON_LOCAL_TRAFFIC=false \
  datadog/docker-dd-agent:latest

If you are running on Amazon Linux earlier than version 2, use the following instead:

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  -e SD_BACKEND=docker \
  -e NON_LOCAL_TRAFFIC=false \
  datadog/docker-dd-agent:latest

Configuration

Hostname

By default, the Agent container uses the Name field reported by docker info on the host as its hostname. To change this behavior, update the hostname field in /etc/dd-agent/datadog.conf. The easiest way to do this is with the DD_HOSTNAME environment variable (see below).
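
For example, a minimal run command that pins the hostname via DD_HOSTNAME (the hostname value here is illustrative):

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  -e DD_HOSTNAME=my-docker-host \
  datadog/docker-dd-agent:latest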

CGroups

For the Docker check to succeed, memory management by cgroup must be enabled on the host, as explained in the Debian wiki. On Debian Jessie or later, for example, you need to add cgroup_enable=memory swapaccount=1 to your boot options; otherwise the Agent won't be able to recognize your system. See this thread for details.
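
As a sketch, on a Debian-like host that boots with GRUB 2, the kernel options can be added as follows (paths and commands are assumptions about your host; adapt them to your distribution):

# In /etc/default/grub, append the options to GRUB_CMDLINE_LINUX, e.g.:
#   GRUB_CMDLINE_LINUX="... cgroup_enable=memory swapaccount=1"
# Then regenerate the GRUB configuration and reboot:
sudo update-grub
sudo reboot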

Autodiscovery

The commands in the Quick Start section enable Autodiscovery in auto-conf mode, meaning the Agent will automatically run checks against any containers running images listed in the default check templates.

To learn more about Autodiscovery, read the Autodiscovery guide on the Datadog Docs site. To disable it, omit the SD_BACKEND environment variable when starting docker-dd-agent.

Environment variables

Some configuration parameters can be changed with environment variables (a combined example follows this list):

  • DD_HOSTNAME sets the hostname (written to datadog.conf).

  • TAGS sets host tags. Add -e TAGS=simple-tag-0,tag-key-1:tag-value-1 to use [simple-tag-0, tag-key-1:tag-value-1] as host tags.

  • EC2_TAGS enables EC2 host tags. Add -e EC2_TAGS=yes to use EC2 custom host tags. Requires an IAM role associated with the instance.

  • LOG_LEVEL sets logging verbosity (CRITICAL, ERROR, WARNING, INFO, DEBUG). Add -e LOG_LEVEL=DEBUG to turn on debug logging.

  • DD_LOGS_STDOUT: set it to yes to send all logs to stdout and stderr so that Docker can process them.

  • PROXY_HOST, PROXY_PORT, PROXY_USER and PROXY_PASSWORD set the proxy configuration.

  • DD_URL sets the Datadog intake server to send Agent data to (used when routing data through an Agent acting as a proxy).

  • NON_LOCAL_TRAFFIC configures the non_local_traffic option in the Agent, which enables or disables StatsD reporting from any external IP. You may find this useful to report metrics from your other containers. See network configuration for more details. This option is set to true by default in the image, and the docker run command in the example above disables it. Remove the -e NON_LOCAL_TRAFFIC=false part to re-enable it. Warning: if you allow non-local traffic, make sure your Agent container is not accessible from the Internet or other untrusted networks, as that would allow anyone to submit metrics to it.

  • SD_BACKEND, SD_CONFIG_BACKEND, SD_BACKEND_HOST, SD_BACKEND_PORT, SD_TEMPLATE_DIR, SD_CONSUL_TOKEN, SD_BACKEND_USER and SD_BACKEND_PASSWORD configure Autodiscovery (previously known as Service Discovery):

    • SD_BACKEND: set to docker (the only supported backend) to enable Autodiscovery.
    • SD_CONFIG_BACKEND: set to etcd, consul, or zk to use one of these key-value stores as a template source.
    • SD_BACKEND_HOST and SD_BACKEND_PORT: configure the connection to the key-value template source.
    • SD_TEMPLATE_DIR: when using SD_CONFIG_BACKEND, set the path where the check configuration templates are located in the key-value store (default is datadog/check_configs)
    • SD_CONSUL_TOKEN: when using Consul as a template source and the Consul cluster requires authentication, set a token so the Datadog Agent can connect.
    • SD_BACKEND_USER and SD_BACKEND_PASSWORD: when using etcd as a template source and it requires authentication, set a user and password so the Datadog Agent can connect.
  • DD_APM_ENABLED runs the trace-agent along with the infrastructure Agent, allowing the container to accept traces on 8126/tcp. (This option is NOT available on Alpine images.)

  • DD_PROCESS_AGENT_ENABLED runs the process-agent along with the infrastructure Agent, feeding data to the Live Process View and Live Containers View. (This option is NOT available on Alpine images.)

  • DD_COLLECT_LABELS_AS_TAGS enables collection of the listed labels as tags. Provide a comma-separated string, without spaces unless quoted. Example: -e DD_COLLECT_LABELS_AS_TAGS='com.docker.label.foo, com.docker.label.bar' or -e DD_COLLECT_LABELS_AS_TAGS=com.docker.label.foo,com.docker.label.bar.

  • MAX_TRACES_PER_SECOND: Specifies the maximum number of traces per second to sample for APM. Set to 0 to disable this limit.

  • DD_HISTOGRAM_PERCENTILES: histogram percentiles to compute, separated by commas. The default is "0.95"

  • DD_HISTOGRAM_AGGREGATES: histogram aggregates to compute, separated by commas. The default is "max, median, avg, count"

Note: Some of these variables have alternative names with the same effect: you can use DD_TAGS instead of TAGS, DD_LOG_LEVEL instead of LOG_LEVEL and DD_API_KEY instead of API_KEY.
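
For instance, a run command combining several of the variables above (the hostname, tags, and log level values are illustrative):

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  -e DD_HOSTNAME=my-docker-host \
  -e TAGS=env:staging,role:dd-agent \
  -e LOG_LEVEL=INFO \
  datadog/docker-dd-agent:latest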

Enabling integrations

Environment variables

Some checks can be enabled directly through the environment (see the example after this list):

  • KUBERNETES enables the Kubernetes check if set (KUBERNETES=yes works).
  • To collect Kubernetes events, set KUBERNETES_COLLECT_EVENTS to true on one Agent per cluster. Alternatively, enable the leader election mechanism by setting KUBERNETES_LEADER_CANDIDATE to true on candidate Agents, and adjust the lease time (in seconds) with the KUBERNETES_LEADER_LEASE_DURATION variable.
  • By default, only events from the default namespace are collected. To change which namespaces are used, set KUBERNETES_NAMESPACE_NAME_REGEX to a valid regular expression matching your relevant namespaces.
  • To collect the kube_service tags, the Agent needs to query the apiserver's events and services endpoints. If you need to disable that, pass KUBERNETES_COLLECT_SERVICE_TAGS=false.
  • The kubelet API endpoint is assumed to be the container's default route; you can override it by setting KUBERNETES_KUBELET_HOST (e.g. when using CNI networking, the kubelet API may not listen on the default route address).
  • MESOS_MASTER and MESOS_SLAVE respectively enable the Mesos master and Mesos slave checks if set (MESOS_MASTER=yes works).
  • MARATHON_URL, if set, enables the Marathon check, which queries the given URL for metrics. It can usually be set to http://leader.mesos:8080.
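
As a sketch, enabling the Kubernetes check and event collection on a single Agent could look like this (volumes and additional settings depend on your cluster setup):

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  -e KUBERNETES=yes \
  -e KUBERNETES_COLLECT_EVENTS=true \
  datadog/docker-dd-agent:latest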

Autodiscovery

Another way to enable checks is through Autodiscovery. This is particularly useful in dynamic environments like Kubernetes, Amazon ECS, or Docker Swarm. Read more about Autodiscovery on the Datadog Docs site.

Configuration files

You can also mount YAML configuration files in the /conf.d folder; they are automatically copied to /etc/dd-agent/conf.d/ when the container starts. The same applies to the /checks.d folder: any Python files in /checks.d are automatically copied to /etc/dd-agent/checks.d/ when the container starts.

  1. Create a configuration folder on the host and write your YAML files in it (an example nginx.yaml is shown after these steps). The examples below can be used for the /checks.d folder as well.

    mkdir /opt/dd-agent-conf.d
    touch /opt/dd-agent-conf.d/nginx.yaml
    
  2. When creating the container, mount this new folder to /conf.d.

    docker run -d --name dd-agent \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /proc/:/host/proc/:ro \
      -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
      -v /opt/dd-agent-conf.d:/conf.d:ro \
      -e API_KEY={your_api_key_here} \
      datadog/docker-dd-agent
    

    The important part here is -v /opt/dd-agent-conf.d:/conf.d:ro

Now when the container starts, all files in /opt/dd-agent-conf.d with a .yaml extension will be copied to /etc/dd-agent/conf.d/. Please note that to add new files you will need to restart the container.
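
For reference, a minimal nginx.yaml written into that folder might look like the following sketch (the nginx_status_url value is an assumption about your NGINX setup):

cat > /opt/dd-agent-conf.d/nginx.yaml <<EOF
init_config:
instances:
  - nginx_status_url: http://localhost:8080/nginx_status/
EOF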

JMX Images

If you need to run any JMX-based Agent checks, run a JMX image, e.g. datadog/docker-dd-agent:latest-jmx, datadog/docker-dd-agent:11.0.5150-jmx, etc. These images are based on the default images but add a JVM, which is needed for the Agent to run jmxfetch.
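
For example, running the JMX-enabled image with a mounted configuration folder (the folder path is illustrative; see the Configuration files section above):

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -v /opt/dd-agent-conf.d:/conf.d:ro \
  -e API_KEY={your_api_key_here} \
  datadog/docker-dd-agent:latest-jmx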

DogStatsD

Standalone DogStatsD

The default images (e.g. latest) run a DogStatsD server as well as the main Agent (i.e. the collector). If you want to run DogStatsD only, run a DogStatsD-only image, e.g. datadog/docker-dd-agent:latest-dogstatsd, datadog/docker-dd-agent:11.0.5141-dogstatsd-alpine, etc. These images don't run the collector process.

They also run the DogStatsD server as a non-root user, which is useful on platforms like OpenShift, and they don't need the volumes shared from the host (/proc, /sys/fs and the Docker socket) that the default Agent image requires.
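
A minimal run of a DogStatsD-only image might then look like this sketch (the container name and port mapping are illustrative):

docker run -d --name dogstatsd \
  -p 8125:8125/udp \
  -e API_KEY={your_api_key_here} \
  datadog/docker-dd-agent:latest-dogstatsd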

Note: Metrics submitted by this container will NOT get tagged with any global tags specified in datadog.conf. These tags are only read by the Agent's collector process, which these DogStatsD-only images do not run.

Note: Optionally, these images can run the trace-agent process. Pass -e DD_APM_ENABLED=true to your docker run command to activate the trace-agent and allow your container to receive traces from Datadog's APM client libraries.

DogStatsD from the host

DogStatsD can be available on port 8125 from anywhere by adding the option -p 8125:8125/udp to the docker run command.

To make it available from your host only, use -p 127.0.0.1:8125:8125/udp instead.

Disable dogstatsd

DogStatsD can be disabled by setting USE_DOGSTATSD to no.

DogStatsD from other containers

Using Docker host IP

Since the Agent container's port 8125 should be bound to the host directly, you can reach DogStatsD through the host. From inside a container, the host's IP address is usually the address of the container's default route, which you can find with ip route. You can then configure your DogStatsD client to connect to, for example, 172.17.42.1:8125.
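
As a sketch, from inside another container you could discover the host address and send a test metric with netcat (the metric name is illustrative and nc must be installed in the container):

# Address of the default route, which is usually the Docker host
DOCKER_HOST_IP=$(ip route | awk '/^default/ { print $3 }')
# Send a test counter to DogStatsD on the host-mapped port
echo -n "custom.test.metric:1|c" | nc -u -w1 "$DOCKER_HOST_IP" 8125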

Using Docker links (Legacy)

To send data to DogStatsD from other containers, add a --link dogstatsd:dogstatsd option to your run command.

For example, run a container my_container with the image my_image.

docker run  --name my_container           \
            --all_your_flags              \
            --link dogstatsd:dogstatsd    \
            my_image

The DogStatsD address and port will then be available in my_container through the environment variables DOGSTATSD_PORT_8125_UDP_ADDR and DOGSTATSD_PORT_8125_UDP_PORT.
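
Inside my_container, a quick way to verify the link is to send a test metric using those variables (a sketch; the metric name is illustrative and nc must be available in the container):

echo -n "custom.test.metric:1|c" | nc -u -w1 "$DOGSTATSD_PORT_8125_UDP_ADDR" "$DOGSTATSD_PORT_8125_UDP_PORT"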

Tracing + APM

Enable the datadog-trace-agent in the docker-dd-agent container by passing DD_APM_ENABLED=true as an environment variable.

Note: APM is NOT available on Alpine images.

Tracing from the host

Tracing can be made available on port 8126/tcp from anywhere by adding the option -p 8126:8126/tcp to the docker run command.

To make it available from your host only, use -p 127.0.0.1:8126:8126/tcp instead.

For example, the following command allows the Agent to receive traces from anywhere:

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  -e DD_APM_ENABLED=true \
  -p 8126:8126/tcp \
  datadog/docker-dd-agent

Previous instructions required binding to port 7777. This is a legacy port used by former client libraries and has been replaced by 8126.

Tracing from other containers

As with DogStatsD, traces can be submitted to the Agent from other containers, either by using the Docker host IP or with Docker links.

Using Docker links

docker run  --name my_container           \
            --all_your_flags              \
            --link dd-agent:dd-agent    \
            my_image

This exposes DD_AGENT_PORT_8126_TCP_ADDR and DD_AGENT_PORT_8126_TCP_PORT as environment variables in my_container. Your application tracer can be configured to submit traces to this address.

An example in Python:

import os
from ddtrace import tracer
tracer.configure(
    hostname=os.environ["DD_AGENT_PORT_8126_TCP_ADDR"],
    port=os.environ["DD_AGENT_PORT_8126_TCP_PORT"]
)

Using Docker host IP

Since the Agent container's port 8126 should be bound to the host directly, you can determine the address of the container's default route (with ip route, for example) and configure your application tracer to report to it.

An example in Python, assuming 172.17.0.1 is the default route:

from ddtrace import tracer; tracer.configure(hostname="172.17.0.1", port=8126)

Build an image

To configure specific settings of the agent directly in the image, you may need to build a Docker image on top of ours.

  1. Create a Dockerfile to set your specific configuration or to install dependencies.

    FROM datadog/docker-dd-agent
    # Example: MySQL
    ADD conf.d/mysql.yaml /etc/dd-agent/conf.d/mysql.yaml
    
  2. Build it.

    docker build -t dd-agent-image .

  3. Then run it like the datadog/docker-dd-agent image.

    docker run -d --name dd-agent \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /proc/:/host/proc/:ro \
      -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
      -e API_KEY={your_api_key_here} \
      dd-agent-image
    
  4. It's done!

You can find some examples in our GitHub repository.

Alpine-based image

Starting from Agent 5.7 we also provide an image based on Alpine Linux. This image is smaller (about 60% the size of the Debian based one), and benefits from Alpine's security-oriented design. It is compatible with all options described in this file (Autodiscovery, enabling specific integrations, etc.) with the exception of JMX and Tracing (the trace-agent does not ship with the Alpine images).

This image is available under tags following the naming convention usual_tag_name-alpine. For example, to use the latest tag, pull datadog/docker-dd-agent:latest-alpine; to use a specific version, specify a tag such as 11.2.583-alpine.

The Alpine version can be used this way:

```
docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e API_KEY={your_api_key_here} \
  datadog/docker-dd-agent:latest-alpine
```

Note: In this version, check configuration files must be stored in /opt/datadog-agent/agent/conf.d/ instead of /etc/dd-agent/conf.d/.

Warning: This version is recent, and its behaviour may differ a little (namely, it runs a source-installed Agent, so commands need to be adapted). If you find a bug, don't hesitate to file an issue; feedback is appreciated.

Versioning pattern

The Docker image follows a versioning pattern that allows us to release changes to the Docker image of the Datadog Agent while bundling the same version of the Agent.

The Docker image version follows the pattern:

X.Y.Z, where X is the major version of the Docker image, Y is the minor version, and Z represents the Agent version.

For example, the first version of the Docker image that bundled Datadog Agent 5.5.0 was:

10.0.550

Information

To display information about the Agent's state, use the following command.

debian:

docker exec dd-agent service datadog-agent info

alpine:

docker exec dd-agent /opt/datadog-agent/bin/agent info

Warning: the docker exec command is available only with Docker 1.3 and above.

Logs

Copy logs from the container to the host

This is the simplest solution: it copies the container's logs to a directory on the host.

docker cp dd-agent:/var/log/datadog /tmp/log-datadog-agent

Supervisor logs

Basic information about the Agent's execution is available through the logs command.

docker logs dd-agent

You can also exec a shell in the container and tail the logs (collector.log, forwarder.log and jmxfetch.log) for debugging. The supervisor.log is available there as well, but you can also get it with docker logs dd-agent from the host.

alpine:

$ docker exec -it dd-agent ash
/opt/datadog-agent # tail -f /opt/datadog-agent/logs/dogstatsd.log
2016-07-22 23:09:09 | INFO | dd.dogstatsd | dogstatsd(dogstatsd.py:210) | Flush #8: flushed 1 metric, 0 events, and 0 service check runs

debian:

$ docker exec -it dd-agent bash
# tail -f /var/log/datadog/dogstatsd.log
2016-07-22 23:09:09 | INFO | dd.dogstatsd | dogstatsd(dogstatsd.py:210) | Flush #8: flushed 1 metric, 0 events, and 0 service check runs

Limitations

The Agent won't be able to collect disk metrics from volumes that are not mounted into the Agent container. If you want to monitor additional partitions, make sure to share them with the container in your docker run command (e.g. -v /data:/data:ro).
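
For example, to also collect disk metrics for a /data partition, the extra mount could be added like this (the path is illustrative):

docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -v /data:/data:ro \
  -e API_KEY={your_api_key_here} \
  datadog/docker-dd-agent:latest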

Docker isolates containers from the host. As a result, the Agent won't have access to all host metrics.

Known missing/incorrect metrics:

  • Network
  • Process list

Also, several integrations might be incomplete. See the "Contribute" section.

Contribute

If you notice a limitation or a bug with this container, feel free to open a GitHub issue. If it concerns the Agent itself, please refer to its documentation or its wiki.
