Sidecar

Gossip-based service discovery. Docker native, but supports static discovery, too.

The main repo for this project is the NinesStack fork.

Sidecar is a dynamic service discovery platform requiring no external coordination service. It's a peer-to-peer system that uses a gossip protocol for all communication between hosts. Sidecar health checks local services and announces them to peer systems. It's Docker-native, so your containerized applications work out of the box. It's designed to be Available, Partition tolerant, and eventually consistent, where "eventually" means a window on the order of a few seconds.

Sidecar is part of a small ecosystem of tools. It can stand entirely alone or can also leverage:

  • Lyft's Envoy Proxy - Envoy is fast becoming a core component of microservices architectures. Sidecar implements the Envoy proxy SDS, CDS, and LDS (V1) APIs as well as the gRPC (V2) API. These allow a standalone Envoy to be entirely configured by Sidecar. This is best used with NinesStack's Envoy proxy container.

  • haproxy-api - A separation layer that allows Sidecar to drive HAproxy in a separate container. It also allows a local HAproxy to be configured against a remote Sidecar instance.

  • sidecar-executor - A Mesos executor that integrates with Sidecar, allowing your containers to be health checked by Sidecar for both service health and service discovery. Also supports a number of extra features including Vault integration for secrets management.

  • sidecar-dns - A working but still work-in-progress project that serves DNS SRV records from Sidecar service state.

Overview in Brief

Services communicate with each other through a proxy (Envoy or HAproxy) instance on each host, which is itself managed and configured by Sidecar. This is, in effect, a half service mesh: outbound connections go through the proxy, but inbound requests do not. This has most of the advantages of a service mesh with a lot less complexity to manage. It is inspired by Airbnb's SmartStack, but we believe it has a few advantages over SmartStack:

  • Eventually consistent model - a better fit for real world microservices
  • Native support for Docker (works without Docker, too!)
  • No dependence on Zookeeper or other centralized services
  • Peer-to-peer, so it works on your laptop or on a large cluster
  • Static binary means it's easy to deploy, and there is no interpreter needed
  • Tiny memory usage (under 20MB) and few execution threads mean it's very lightweight

See it in Action: We presented Sidecar at Velocity 2015 and recorded a YouTube video demonstrating Sidecar with Centurion, deploying services in Docker containers, and seeing Sidecar discover and health check them. The second video shows the current state of the UI, which has improved since the first video.


Complete Overview and Theory

(Sidecar architecture diagram)

Sidecar is an eventually consistent service discovery platform where hosts learn about each other's state via a gossip protocol. Hosts exchange messages about which services they are running and which have gone away. All messages are timestamped and the latest timestamp always wins. Each host maintains its own local state and continually merges in changes from others. Messaging is over UDP except when doing anti-entropy transfers.

There is an anti-entropy mechanism where full state exchanges take place between peer nodes on an intermittent basis. This allows for any missed messages to propagate, and helps keep state consistent across the cluster.

Sidecar hosts join a cluster by having a set of cluster seed hosts passed to them on the command line at startup. Once in a cluster, the first thing a host does is merge the state directly from another host. This is a big JSON blob that is delivered over a TCP session directly between the hosts.

Now the host starts continuously polling its own services and reviewing the services that it has in its own state, sleeping a couple of seconds in between. It announces its services as UDP gossip messages every couple of seconds, and also announces tombstone records for any services which have gone away. Likewise, when a host leaves the cluster, any peers that were notified send tombstone records for all of its services. These eventually converge and the latest records should propagate everywhere. If the host rejoins the cluster, it will announce new state every few seconds so the services will be picked back up.
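
To make the merge rule concrete, here is a minimal sketch in Go of the "latest timestamp wins" behavior. The type and field names are illustrative and are not Sidecar's actual data structures.

package main

import (
    "fmt"
    "time"
)

// ServiceRecord is a simplified stand-in for the per-service state that gets
// gossiped around the cluster; the real structures carry more fields.
type ServiceRecord struct {
    ID      string
    Status  string // e.g. "alive" or "tombstone"
    Updated time.Time
}

// merge applies the "latest timestamp wins" rule when a gossiped record
// arrives for a service we may already know about.
func merge(local map[string]ServiceRecord, incoming ServiceRecord) {
    current, seen := local[incoming.ID]
    if !seen || incoming.Updated.After(current.Updated) {
        local[incoming.ID] = incoming
    }
}

func main() {
    state := map[string]ServiceRecord{}
    merge(state, ServiceRecord{ID: "web-1", Status: "alive", Updated: time.Now().Add(-10 * time.Second)})
    merge(state, ServiceRecord{ID: "web-1", Status: "tombstone", Updated: time.Now()})
    fmt.Println(state["web-1"].Status) // prints "tombstone": the newer record wins
}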

There are lifespans assigned to both tombstone and alive records so that:

  1. A service that was not correctly tombstoned will go away in short order
  2. We do not continually add to the tombstone state we are carrying

Because the gossip mechanism is UDP and a service going away is a higher priority message, each tombstone is sent twice initially, followed by once a second for 10 seconds. This delivers reliable messaging of service death.
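
As a rough illustration of that schedule (not Sidecar's actual implementation), the retransmission loop looks something like this, with the UDP send stubbed out:

package main

import (
    "fmt"
    "time"
)

// sendTombstone stands in for gossiping a tombstone record over UDP.
func sendTombstone(serviceID string) {
    fmt.Println("tombstone:", serviceID)
}

func main() {
    // Two immediate sends, then once a second for ten seconds.
    sendTombstone("web-1")
    sendTombstone("web-1")
    for i := 0; i < 10; i++ {
        time.Sleep(time.Second)
        sendTombstone("web-1")
    }
}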

Timestamps are all local to the host that sent them, because clocks can drift between machines. As long as we always compare origin timestamps, they are at least comparable to each other by all hosts in the cluster. The one exception is that if clock drift exceeds a second or two, the alive lifespan may be negatively impacted.

Running it

You can download the latest release from the GitHub Releases page.

If you'd rather build it yourself, you should install the latest version of the Go compiler. Sidecar has not been tested with gccgo, only the mainstream Go compiler.

It's a Go application and the dependencies are all vendored into the vendor/ directory so you should be able to build it out of the box.

$ go build

Or you can run it like this:

$ go run *.go --cluster-ip <bootstrap_host>

You always need to supply at least one IP address or hostname with the --cluster-ip argument (or via the SIDECAR_SEEDS environment variable). If you are running solo, or are the first member, this can be your own hostname. You may specify the argument multiple times to supply multiple hosts. It is recommended to use more than one when possible.

Note: --cluster-ip will overwrite the values passed into the SIDECAR_SEEDS environment variable.
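
For example, to join a cluster seeded by two other hosts (the IP addresses and the sidecar binary name are placeholders):

$ ./sidecar --cluster-ip 10.0.0.5 --cluster-ip 10.0.0.6

or, using the environment variable instead:

$ SIDECAR_SEEDS="10.0.0.5,10.0.0.6" ./sidecar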

Running in a Container

The easiest way to deploy Sidecar to your Docker fleet is to run it in a container itself. Instructions for doing that are provided.

Nitro Software maintains builds of the Docker container image on Docker Hub. Note that the README describes how to configure this container.

Configuration

Sidecar configuration is done through environment variables, with a few options also supported on the command line. Once the configuration has been parsed, Sidecar will use Rubberneck to print out the values that were used. The environment variables are as follows, with defaults shown at the end of each line; a minimal example configuration follows the list:

  • SIDECAR_LOGGING_LEVEL: The logging level to use (debug, info, warn, error). Default: info

  • SIDECAR_LOGGING_FORMAT: Logging format to use (text, json). Default: text

  • SIDECAR_DISCOVERY: Which discovery backends to use as a csv array (static, docker, kubernetes_api). Default: [ docker ]

  • SIDECAR_SEEDS: csv array of IP addresses used to seed the cluster.

  • SIDECAR_CLUSTER_NAME: The name of the Sidecar cluster. Restricts membership to hosts with the same cluster name.

  • SIDECAR_BIND_PORT: Manually override the Memberlist bind port. Default: 7946

  • SIDECAR_ADVERTISE_IP: Manually override the IP address Sidecar uses for cluster membership.

  • SIDECAR_EXCLUDE_IPS: csv array of IPs to exclude from interface selection. Default: [ 192.168.168.168 ]

  • SIDECAR_STATS_ADDR: An address to send performance stats to. Default: none

  • SIDECAR_PUSH_PULL_INTERVAL: How long to wait between anti-entropy syncs. Default: 20s

  • SIDECAR_GOSSIP_MESSAGES: How many times to gather messages per round. Default: 15

  • SIDECAR_DEFAULT_CHECK_ENDPOINT: Default endpoint to health check services on. Default: /version

  • SERVICES_NAMER: Which method to use to extract service names (docker_label, regex). In either case it falls back to the image name. Default: docker_label

  • SERVICES_NAME_MATCH: The regexp to use to extract the service name from the container name.

  • SERVICES_NAME_LABEL: The Docker label to use to identify service names. Default: ServiceName

  • DOCKER_URL: How to connect to Docker if Docker discovery is enabled. Default: unix:///var/run/docker.sock

  • STATIC_CONFIG_FILE: The config file to use if static discovery is enabled. Default: static.json

  • LISTENERS_URLS: If we want to statically configure any event listeners, the URLs should go in a csv array here. See the Sidecar Events and Listeners section below for more on dynamic listeners.

  • HAPROXY_DISABLE: Disable management of HAproxy entirely. This is useful if you need to run without a proxy or are using something like haproxy-api to manage HAproxy based on Sidecar events. You should also use this setting if you are using Envoy as your proxy.

  • HAPROXY_RELOAD_COMMAND: The reload command to use for HAproxy. Default: sane defaults

  • HAPROXY_VERIFY_COMMAND: The verify command to use for HAproxy. Default: sane defaults

  • HAPROXY_BIND_IP: The IP that HAproxy should bind to on the host. Default: 192.168.168.168

  • HAPROXY_TEMPLATE_FILE: The source template file to use when writing HAproxy configs. This is a Go text template. Default: views/haproxy.cfg

  • HAPROXY_CONFIG_FILE: The path where the haproxy.cfg file will be written. Note that if you change this you will need to update the verify and reload commands. Default: /etc/haproxy.cfg

  • HAPROXY_PID_FILE: The path where HAproxy's PID file will be written. Note that if you change this you will need to update the verify and reload commands. Default: /var/run/haproxy.pid

  • HAPROXY_USER: The Unix user under which HAproxy should run. Default: haproxy

  • HAPROXY_GROUP: The Unix group under which HAproxy should run. Default: haproxy

  • HAPROXY_USE_HOSTNAMES: Should we write hostnames in the HAproxy config instead of IP addresses? Default: false

  • ENVOY_USE_GRPC_API: Enable the Envoy gRPC API (V2). Default: true

  • ENVOY_BIND_IP: The IP that Envoy should bind to on the host. Default: 192.168.168.168

  • ENVOY_USE_HOSTNAMES: Should we write hostnames in the Envoy config instead of IP addresses? Default: false

  • ENVOY_GRPC_PORT: The port for the Envoy API gRPC server. Default: 7776

  • KUBE_API_IP: The IP address at which to reach the Kubernetes API. Default: 127.0.0.1

  • KUBE_API_PORT: The port to use to contact the Kubernetes API. Default: 8080

  • NAMESPACE: The namespace against which we should do discovery. Default: default

  • KUBE_TIMEOUT: How long until we time out calling the Kube API? Default: 3s

  • CREDS_PATH: Where do we find the token file containing API auth credentials? Default: /var/run/secrets/kubernetes.io/serviceaccount

  • ANNOUNCE_ALL_NODES: Should we query the API and announce that every node is running the service? This is useful to represent the K8s cluster with a single Sidecar. Default: false
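
As a minimal example (the values are illustrative only), a Docker-discovery host joining a two-seed cluster might be configured like this:

export SIDECAR_SEEDS="10.0.0.5,10.0.0.6"
export SIDECAR_CLUSTER_NAME="my-cluster"
export SIDECAR_DISCOVERY="docker"
export SIDECAR_LOGGING_LEVEL="info"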

Ports

Sidecar requires that both TCP and UDP be open on the port configured via SIDECAR_BIND_PORT (default 7946), through any network filters or firewalls between it and any peers in the cluster. This is the port the gossip protocol (Memberlist) runs on.

Discovery

Sidecar supports three discovery mechanisms: Docker-based discovery, a "static" mechanism where you publish services into a local JSON file, and discovery from the Kubernetes API. Services found by any of these are advertised as running services just like they would be from a Docker host. The mechanisms are configured with the SIDECAR_DISCOVERY environment variable. Using all of them would look like:

export SIDECAR_DISCOVERY=static,docker,kubernetes_api

Zero or more options may be supplied. Note that if this setting is empty, Sidecar will still participate in the cluster but will not announce anything.

Configuring Docker Discovery

Sidecar currently accepts a single option for Docker-based discovery: the URL to use to connect to Docker. Ideally this will be the Docker daemon on the same machine Sidecar runs on, because Sidecar makes assumptions about addresses. By default it will use the standard Docker Unix domain socket. You can change this with the DOCKER_URL env var. This needs to be a URL that works with the Docker client.

Note that Sidecar only supports a single URL, unlike the Docker CLI tool.

NOTE Sidecar can now use the normal Docker environment variables for configuring Docker discovery. If you unset DOCKER_URL entirely, it will fall back to trying to use environment variables to configure Docker. It uses the standard variables like DOCKER_HOST, TLS_VERIFY, etc.

Docker Labels

When running Docker discovery, Sidecar relies on Docker labels to understand how to handle a service it has discovered. It uses these labels to determine:

  1. How to map container ports to proxy ports: ServicePort_XXX
  2. How to name the service: ServiceName
  3. How to health check the service: HealthCheck and HealthCheckArgs
  4. Whether the service should receive Sidecar change events: SidecarListener
  5. Whether Sidecar should ignore the service entirely: SidecarDiscover
  6. Whether the proxy (Envoy or HAproxy) runs in HTTP, TCP, or websocket mode: ProxyMode

Service Ports Services may be started with one or more ServicePort_xxx labels that help Sidecar to understand ports that are mapped dynamically. This controls the port on which the proxy will listen for the service as well. If I have a service where the container is built with EXPOSE 80 and I want my proxy to listen on port 8080 then I will add a Docker label to the service in the form:

	ServicePort_80=8080

With dynamic port bindings, Docker may then bind that port to 32767, but Sidecar will know which service and port it belongs to.

Health Checks If your services are not checkable with the default settings, they need to have two Docker labels defining how they are to be health checked. To health check a service on port 9090 on the local system with an HttpGet check, for example, you would use the following labels:

	HealthCheck=HttpGet
	HealthCheckArgs=http://:9090/status

The currently available check types are HttpGet, External and AlwaysSuccessful. External checks will run the command specified in the HealthCheckArgs label (in the context of a bash shell). An exit status of 0 is considered healthy and anything else is unhealthy. Nagios checks work very well with this mode of health checking.
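
For example, an External check might look like the following labels. The check command here is purely illustrative; it must be a command that exists wherever Sidecar runs the check:

	HealthCheck=External
	HealthCheckArgs=/opt/checks/check_my_service --port 9090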

Excluding From Discovery Additionally, it can sometimes be nice to exclude certain containers from discovery. This is particularly useful if you are running Sidecar in a container itself. This is accomplished with another Docker label like so:

	SidecarDiscover=false

Proxy Behavior By default, HAProxy or Envoy will run in HTTP mode. The mode can be changed to TCP by setting the following Docker label:

ProxyMode=tcp

You may also enable Websocket support where it's available (e.g. in Envoy) by setting:

ProxyMode=ws

Templating In Labels You sometimes need to pass information in the Docker labels which is not available to you at the time of container creation. One example of this is the need to identify the actual Docker-bound port when running the health check. For this reason, Sidecar allows simple templating in the labels. Here's an example.

If you have a service that is exposing port 8080 and Docker dynamically assigns it the port 31445 at runtime, your health check for that port will be impossible to define ahead of time. But with templating we can say:

--label HealthCheckArgs="http://{{ host }}:{{ tcp 8080 }}/"

This will then fill the template fields, at call time, with the current hostname and the actual port that Docker bound to your container's port 8080. Querying of UDP ports works as you might expect, by calling {{ udp 53 }} for example.

Note that the tcp and udp method calls in the templates refer only to ports mapped with ServicePort labels. You will need to use the port number that you expect the proxy to use.
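
For the curious, here is a minimal sketch of how a label like the one above can be expanded with Go's text/template package. The bound-port map and the function bodies are illustrative and are not Sidecar's actual code:

package main

import (
    "os"
    "text/template"
)

func main() {
    // Hypothetical mapping from a ServicePort to the port Docker actually bound.
    bound := map[int]int{8080: 31445}

    funcs := template.FuncMap{
        "host": func() (string, error) { return os.Hostname() },
        "tcp":  func(port int) int { return bound[port] },
        "udp":  func(port int) int { return bound[port] },
    }

    // The same template syntax used in the HealthCheckArgs label above.
    tmpl := template.Must(template.New("label").Funcs(funcs).Parse("http://{{ host }}:{{ tcp 8080 }}/"))
    _ = tmpl.Execute(os.Stdout, nil) // prints e.g. http://myhost:31445/
}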

Configuring Static Discovery

Static Discovery requires a static entry in the SIDECAR_DISCOVERY variable. Sidecar will then look for a file configured with STATIC_CONFIG_FILE to export services. This file is usually static.json in the current working directory of the process.

A static discovery file might look like this:

[
    {
        "Service": {
            "Name": "some_service",
            "Image": "bb6268ff91dc42a51f51db53846f72102ed9ff3f",
            "Ports": [
                {
                    "Type": "tcp",
                    "Port": 10234,
                    "ServicePort": 9999
                }
            ],
            "ProxyMode": "http"
        },
        "Check": {
            "Type": "HttpGet",
            "Args": "http://:10234/"
        }
    },
    {
    ...
    }
]

Here we've defined both the service itself and the health check to use to validate its status. It supports a single health check per service. You should supply something in place of the value for Image that is meaningful to you. Usually this is a version or git commit string. It will show up in the Sidecar web UI.

A further example is available in the fixtures/ directory used by the tests.

Configuring Kubernetes API Discovery

This method of discovery enables you to bridge an existing Sidecar cluster with a Kubernetes cluster, making Kubernetes services available to the Sidecar cluster. It announces all of the Kubernetes services it finds and maps each of them to a port in the 30000+ range, with the expectation that you have configured the services to run with a NodePort in that range.

This is most useful for transitioning services from one cluster to another. You can run one or more Sidecar instances per Kubernetes cluster and they will show up like services exported from other discovery mechanisms with the exception that version information is not passed. The environment variables for configuring the behavior of this discovery method are described above.

Sidecar Events and Listeners

Services which need to know about service discovery change events can subscribe to Sidecar events. Any time a significant change happens, the listener will receive an update over HTTP from Sidecar. There are three mechanisms by which a service can subscribe to Sidecar events:

  1. Add the endpoint in the LISTENERS_URLS env var, e.g.:

    export LISTENERS_URLS="http://localhost:7778/api/update"

    This is an array and can be separated with spaces or commas.

  2. Add a Docker label to the subscribing service in the form SidecarListener=10005 where 10005 is a port that is mapped to a ServicePort with a Docker label like ServicePort_80=10005. This port will then receive all updates on the /sidecar/update endpoint. The subscription will be dynamically added and removed when the service starts or stops.

  3. Add the listener export to the static.json file exposed by static services. The ListenPort is a top-level setting for the Target and is of the form ListenPort: 10005 inside the Target definition.
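
A listener can be as simple as an HTTP handler that accepts the POSTed state. The sketch below matches the LISTENERS_URLS example above (http://localhost:7778/api/update); the payload is decoded into a generic map because its exact structure is not described here:

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

func main() {
    // Receive Sidecar state-change events POSTed to /api/update on port 7778.
    http.HandleFunc("/api/update", func(w http.ResponseWriter, r *http.Request) {
        var update map[string]interface{}
        if err := json.NewDecoder(r.Body).Decode(&update); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        log.Printf("received Sidecar update with %d top-level keys", len(update))
        w.WriteHeader(http.StatusOK)
    })
    log.Fatal(http.ListenAndServe(":7778", nil))
}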

Monitoring It

The logging output is pretty good at the normal info level. It can be made quite verbose in debug mode, and contains lots of information about what's going on and what the current state is. The web interface also contains a lot of runtime information about the cluster and the services. If you are running HAproxy, it's also recommended that you expose the HAproxy stats port on 3212 so that Sidecar can find it.

Currently the web interface runs on port 7777 on each machine that runs sidecar.

The /ui/services endpoint is a very textual web interface for humans. The /api/services.json endpoint is JSON-encoded. The JSON is still pretty-printed so it's readable by humans.

Sidecar API

Other than the UI that lives on the base URL, there is a minimalist API available for querying Sidecar. It supports the following endpoints:

  • /services.json: This returns a big JSON blob sorted and grouped by service.
  • /state.json: Returns the whole internal state blob in the internal representation order (servers -> server -> service -> instances)
  • /services/<service name>.json: Returns the same format as the /services.json endpoint, but only contains data for a single service.
  • /watch: Inconsistently named endpoint that returns JSON blobs on a long-poll basis every time the internal state changes. Useful for anything that needs to know the ongoing service status.

Sidecar can also be configured to post the internal state to HTTP endpoints on any change event. See the "Sidecar Events and Listeners" section.
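
As an example of consuming the /watch endpoint, the long-poll loop below simply logs the size of each state blob it receives. It assumes Sidecar is listening locally on the default web port 7777; a real consumer would decode the JSON:

package main

import (
    "io"
    "log"
    "net/http"
    "time"
)

func main() {
    // Long-poll /watch; each response is a JSON blob of the current state,
    // returned whenever the internal state changes.
    for {
        resp, err := http.Get("http://localhost:7777/watch")
        if err != nil {
            log.Printf("watch failed: %v", err)
            time.Sleep(time.Second)
            continue
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        log.Printf("state changed: received %d bytes of JSON", len(body))
    }
}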

Envoy Proxy Support

Envoy uses a very different model than HAproxy and thus Sidecar's support for it is quite different from its support for HAproxy.

When using the REST-based LDS API (V1), Envoy makes requests to a variety of discovery service APIs on a timed basis. Sidecar currently implements three of these: the Cluster Discovery Service (CDS), the Service Discovery Service (SDS), and the Listeners Discovery Service (LDS). When using the gRPC V2 API, Sidecar sends updates to Envoy as soon as possible via gRPC.

Note that the LDS API (V1) has been deprecated by Envoy and it's recommended to use the gRPC-based V2 API.

Nitro builds and supports an Envoy container that is tested and works against Sidecar. This is the easiest way to run Envoy with Sidecar. You can find an example container configuration here if you need to configure it differently from Nitro's recommended setup.

The critical component is that the Envoy proxy needs to be able to talk to the Sidecar API. By default the Nitro container assumes that Sidecar will be running on 192.168.168.168:7777. If your Sidecar is reachable at that address, you can start the Envoy container with your platform's equivalent of the following Docker command:

docker run -i -t --net host --cap-add NET_ADMIN gonitro/envoyproxy:latest

Note: This assumes host networking mode so that Envoy can freely open and close listeners. Beware that the docker (Linux) bridge network is not reachable on OSX hosts, due to the way containers are run under HyperKit, so we suggest trying this on Linux instead.

Contributing

Contributions are more than welcome. Bug reports with specific reproduction steps are great. If you have a code contribution you'd like to make, open a pull request with suggested code.

Pull requests should:

  • Clearly state their intent in the title
  • Have a description that explains the need for the changes
  • Include tests!
  • Not break the public API

Ping us to let us know you're working on something interesting by opening a GitHub Issue on the project.

By contributing to this project you agree that you are granting New Relic a non-exclusive, non-revokable, no-cost license to use the code, algorithms, patents, and ideas in that code in our products if we so choose. You also agree that the code is provided as-is and that you provide no warranties as to its fitness or correctness for any purpose.

Logo

The logo is used with kind permission from Picture Esk.
