  • Stars: 7,518
  • Rank: 5,123 (Top 0.2%)
  • Language: Go
  • License: MIT License
  • Created: over 7 years ago
  • Updated: 4 months ago


Repository Details

Kafka library in Go

kafka-go

Motivations

We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go client libraries for Kafka at the time of this writing was not ideal. The available options were:

  • sarama, which is by far the most popular but is quite difficult to work with. It is poorly documented, the API exposes low level concepts of the Kafka protocol, and it doesn't support recent Go features like contexts. It also passes all values as pointers which causes large numbers of dynamic memory allocations, more frequent garbage collections, and higher memory usage.

  • confluent-kafka-go is a cgo based wrapper around librdkafka, which means it introduces a dependency on a C library for all Go code that uses the package. It has much better documentation than sarama but still lacks support for Go contexts.

  • goka is a more recent Kafka client for Go which focuses on a specific usage pattern. It provides abstractions for using Kafka as a message passing bus between services rather than an ordered log of events, but this is not the typical use case of Kafka for us at Segment. The package also depends on sarama for all interactions with Kafka.

This is where kafka-go comes into play. It provides both low and high level APIs for interacting with Kafka, mirroring concepts and implementing interfaces of the Go standard library to make it easy to use and integrate with existing software.

Note:

In order to better align with our newly adopted Code of Conduct, the kafka-go project has renamed our default branch to main. For the full details of our Code of Conduct, see this document.

Kafka versions

kafka-go is currently tested with Kafka versions 0.10.1.0 to 2.7.1. While it should also be compatible with later versions, newer features available in the Kafka API may not yet be implemented in the client.

Go versions

kafka-go requires Go version 1.15 or later.
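
The module path is github.com/segmentio/kafka-go; it can be added to a project with the standard go get command:

go get github.com/segmentio/kafka-go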

Connection GoDoc

The Conn type is the core of the kafka-go package. It wraps around a raw network connection to expose a low-level API to a Kafka server.

Here are some examples showing typical use of a connection object:

// to produce messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetWriteDeadline(time.Now().Add(10*time.Second))
_, err = conn.WriteMessages(
    kafka.Message{Value: []byte("one!")},
    kafka.Message{Value: []byte("two!")},
    kafka.Message{Value: []byte("three!")},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}

// to consume messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

b := make([]byte, 10e3) // 10KB max per message
for {
    n, err := batch.Read(b)
    if err != nil {
        break
    }
    fmt.Println(string(b[:n]))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}

To Create Topics

By default, Kafka has auto.create.topics.enable='true' (KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE='true' in the bitnami/kafka Docker image). If this value is set to 'true', then topics will be created as a side effect of kafka.DialLeader like so:

// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
if err != nil {
    panic(err.Error())
}

If auto.create.topics.enable='false' then you will need to create topics explicitly like so:

// to create topics when auto.create.topics.enable='false'
topic := "my-topic"

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var controllerConn *kafka.Conn
controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer controllerConn.Close()


topicConfigs := []kafka.TopicConfig{
    {
        Topic:             topic,
        NumPartitions:     1,
        ReplicationFactor: 1,
    },
}

err = controllerConn.CreateTopics(topicConfigs...)
if err != nil {
    panic(err.Error())
}

To Connect To Leader Via a Non-leader Connection

// to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()
controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var connLeader *kafka.Conn
connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer connLeader.Close()

To list topics

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

partitions, err := conn.ReadPartitions()
if err != nil {
    panic(err.Error())
}

m := map[string]struct{}{}

for _, p := range partitions {
    m[p.Topic] = struct{}{}
}
for k := range m {
    fmt.Println(k)
}

Because it is low level, the Conn type turns out to be a great building block for higher level abstractions, like the Reader for example.

Reader GoDoc

A Reader is another concept exposed by the kafka-go package, which intends to make it simpler to implement the typical use case of consuming from a single topic-partition pair. A Reader also automatically handles reconnections and offset management, and exposes an API that supports asynchronous cancellations and timeouts using Go contexts.

Note that it is important to call Close() on a Reader when a process exits. The kafka server needs a graceful disconnect to stop it from continuing to attempt to send messages to the connected clients. The given example will not call Close() if the process is terminated with SIGINT (ctrl-c at the shell) or SIGTERM (as docker stop or a kubernetes restart does). This can result in a delay when a new reader on the same topic connects (e.g. new process started or new container running). Use a signal.Notify handler to close the reader on process shutdown.

// make a new reader that consumes from topic-A, partition 0, at offset 42
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092","localhost:9093", "localhost:9094"},
    Topic:     "topic-A",
    Partition: 0,
    MaxBytes:  10e6, // 10MB
})
r.SetOffset(42)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
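
As the note above recommends, the reader should be closed when the process receives a termination signal. A minimal sketch (not part of the original example), assuming r is the reader created above and that the os, os/signal, and syscall packages are imported:

sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
go func() {
    <-sigs
    // Closing the reader causes the blocked ReadMessage call to return an
    // error, letting the consumer loop exit and the process shut down cleanly.
    if err := r.Close(); err != nil {
        log.Println("failed to close reader:", err)
    }
}()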

Consumer Groups

kafka-go also supports Kafka consumer groups including broker managed offsets. To enable consumer groups, simply specify the GroupID in the ReaderConfig.

ReadMessage automatically commits offsets when using consumer groups.

// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:   "consumer-group-id",
    Topic:     "topic-A",
    MaxBytes:  10e6, // 10MB
})

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}

There are a number of limitations when using consumer groups:

  • (*Reader).SetOffset will return an error when GroupID is set
  • (*Reader).Offset will always return -1 when GroupID is set
  • (*Reader).Lag will always return -1 when GroupID is set
  • (*Reader).ReadLag will return an error when GroupID is set
  • (*Reader).Stats will return a partition of -1 when GroupID is set
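
A short sketch (not from the original README) illustrating the first three limitations, using the same group reader configuration as above:

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092"},
    GroupID: "consumer-group-id",
    Topic:   "topic-A",
})

// SetOffset is rejected because offsets are managed by the group coordinator.
if err := r.SetOffset(42); err != nil {
    fmt.Println("SetOffset:", err)
}

// Offset and Lag are not meaningful for a group reader and report -1.
fmt.Println(r.Offset(), r.Lag())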

Explicit Commits

kafka-go also supports explicit commits. Instead of calling ReadMessage, call FetchMessage followed by CommitMessages.

ctx := context.Background()
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
    if err := r.CommitMessages(ctx, m); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}

When committing messages in consumer groups, the message with the highest offset for a given topic/partition determines the value of the committed offset for that partition. For example, if messages at offsets 1, 2, and 3 of a single partition were retrieved by calls to FetchMessage, calling CommitMessages with the message at offset 3 will also result in committing the messages at offsets 1 and 2 for that partition.
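
A sketch of this behaviour (assuming the reader r and context ctx from the example above): fetching several messages and committing them in a single call is equivalent to committing only the message with the highest offset in each partition.

msgs := make([]kafka.Message, 0, 3)
for len(msgs) < 3 {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    msgs = append(msgs, m)
}

// Committing the whole batch commits the highest offset per topic/partition;
// the earlier messages in the batch are implicitly covered by it.
if err := r.CommitMessages(ctx, msgs...); err != nil {
    log.Fatal("failed to commit messages:", err)
}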

Managing Commits

By default, CommitMessages will synchronously commit offsets to Kafka. For improved performance, you can instead periodically commit offsets to Kafka by setting CommitInterval on the ReaderConfig.

// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    MaxBytes:       10e6, // 10MB
    CommitInterval: time.Second, // flushes commits to Kafka every second
})

Writer GoDoc

To produce messages to Kafka, a program may use the low-level Conn API, but the package also provides a higher level Writer type which is more appropriate to use in most cases as it provides additional features:

  • Automatic retries and reconnections on errors.
  • Configurable distribution of messages across available partitions.
  • Synchronous or asynchronous writes of messages to Kafka.
  • Asynchronous cancellation using contexts.
  • Flushing of pending messages on close to support graceful shutdowns.
  • Creation of a missing topic before publishing a message. Note: this was the default behaviour up to version v0.4.30.

// make a writer that produces to topic-A, using the least-bytes distribution
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:   "topic-A",
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	kafka.Message{
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}
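
The feature list above also mentions asynchronous writes. A minimal sketch of that mode: with Async set, WriteMessages returns without waiting for the brokers, and errors are reported through the optional Completion callback instead of the return value.

w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"),
    Topic:    "topic-A",
    Balancer: &kafka.LeastBytes{},
    Async:    true,
    Completion: func(messages []kafka.Message, err error) {
        if err != nil {
            log.Println("async write failed:", err)
        }
    },
}

// Returns immediately; delivery happens in the background.
_ = w.WriteMessages(context.Background(), kafka.Message{Value: []byte("fire-and-forget")})

// Close flushes any pending messages before returning.
if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}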

Missing topic creation before publication

// Make a writer that publishes messages to topic-A.
// The topic will be created if it is missing.
w := &kafka.Writer{
    Addr:                   kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:                  "topic-A",
    AllowAutoTopicCreation: true,
}

messages := []kafka.Message{
    {
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    {
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    {
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
}

var err error
const retries = 3
for i := 0; i < retries; i++ {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    
    // attempt to create topic prior to publishing the message
    err = w.WriteMessages(ctx, messages...)
    if errors.Is(err, kafka.LeaderNotAvailable) || errors.Is(err, context.DeadlineExceeded) {
        time.Sleep(time.Millisecond * 250)
        continue
    }

    if err != nil {
        log.Fatalf("unexpected error %v", err)
    }
    break
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}

Writing to multiple topics

Normally, the WriterConfig.Topic is used to initialize a single-topic writer. By excluding that particular configuration, you are given the ability to define the topic on a per-message basis by setting Message.Topic.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    // NOTE: When Topic is not defined here, each Message must define it instead.
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
    // NOTE: Each Message has Topic defined, otherwise an error is returned.
	kafka.Message{
        Topic: "topic-A",
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
        Topic: "topic-B",
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
        Topic: "topic-C",
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}

NOTE: These two patterns are mutually exclusive: if you set Writer.Topic, you must not also explicitly define Message.Topic on the messages you are writing. The opposite applies when you do not define a topic for the writer. The Writer will return an error if it detects this ambiguity.
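
A short sketch (hypothetical values) of the ambiguity described in the note above: the write fails because the topic is defined on both the writer and the message.

w := &kafka.Writer{
    Addr:  kafka.TCP("localhost:9092"),
    Topic: "topic-A",
}

err := w.WriteMessages(context.Background(),
    kafka.Message{Topic: "topic-B", Value: []byte("ambiguous")},
)
// err is non-nil: the topic must be set on the writer or on the message, not both.
fmt.Println(err)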

Compatibility with other clients

Sarama

If you're switching from Sarama and need/want to use the same algorithm for message partitioning, you can either use the kafka.Hash balancer or the kafka.ReferenceHash balancer:

  • kafka.Hash = sarama.NewHashPartitioner
  • kafka.ReferenceHash = sarama.NewReferenceHashPartitioner

The kafka.Hash and kafka.ReferenceHash balancers would route messages to the same partitions that the two aforementioned Sarama partitioners would route them to.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: &kafka.Hash{},
}

librdkafka and confluent-kafka-go

Use the kafka.CRC32Balancer balancer to get the same behaviour as librdkafka's default consistent_random partition strategy.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.CRC32Balancer{},
}

Java

Use the kafka.Murmur2Balancer balancer to get the same behaviour as the canonical Java client's default partitioner. Note: the Java class allows you to directly specify a partition, which is not permitted here.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.Murmur2Balancer{},
}

Compression

Compression can be enabled on the Writer by setting the Compression field:

w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:       "topic-A",
	Compression: kafka.Snappy,
}

The Reader will determine whether the consumed messages are compressed by examining the message attributes. However, the package(s) for all expected codecs must be imported so that they get loaded correctly.

Note: in versions prior to 0.4, programs had to import compression packages to install codecs and support reading compressed messages from kafka. This is no longer the case, and importing the compression packages is now a no-op.

TLS Support

For a bare-bones Conn type, or in the Reader/Writer configs, you can specify a dialer option for TLS support. If the TLS field is nil, it will not connect with TLS. Note: connecting to a Kafka cluster with TLS enabled without configuring TLS on the Conn/Reader/Writer can manifest as opaque io.ErrUnexpectedEOF errors.

Connection

dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")

Reader

dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    Dialer:         dialer,
})

Writer

Direct Writer creation

w := kafka.Writer{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:     "topic-A",
    Balancer:  &kafka.Hash{},
    Transport: &kafka.Transport{
        TLS: &tls.Config{},
    },
}

Using kafka.NewWriter

dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

w := kafka.NewWriter(kafka.WriterConfig{
	Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:   "topic-A",
	Balancer: &kafka.Hash{},
	Dialer:   dialer,
})

Note that kafka.NewWriter and kafka.WriterConfig are deprecated and will be removed in a future release.

SASL Support

You can specify an option on the Dialer to use SASL authentication. The Dialer can be used directly to open a Conn or it can be passed to a Reader or Writer via their respective configs. If the SASLMechanism field is nil, it will not authenticate with SASL.

SASL Authentication Types

Plain

mechanism := plain.Mechanism{
    Username: "username",
    Password: "password",
}

SCRAM

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

Connection

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")

Reader

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092","localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    Dialer:         dialer,
})

Writer

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources,
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

w := kafka.Writer{
	Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:     "topic-A",
	Balancer:  &kafka.Hash{},
	Transport: sharedTransport,
}

Client

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources,
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

client := &kafka.Client{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Timeout:   10 * time.Second,
    Transport: sharedTransport,
}
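
The Client is typically used to issue administrative or protocol-level requests. A rough usage sketch, assuming a Metadata method and kafka.MetadataRequest type (check the GoDoc for the exact API):

// Assumption: the Client exposes a Metadata request method; field names here
// are illustrative and should be verified against the package documentation.
resp, err := client.Metadata(context.Background(), &kafka.MetadataRequest{
    Topics: []string{"topic-A"},
})
if err != nil {
    panic(err)
}
for _, t := range resp.Topics {
    fmt.Println(t.Name)
}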

Reading all messages within a time range

startTime := time.Now().Add(-time.Hour)
endTime := time.Now()
batchSize := int(10e6) // 10MB

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "my-topic1",
    Partition: 0,
    MaxBytes:  batchSize,
})

r.SetOffsetAt(context.Background(), startTime)

for {
    m, err := r.ReadMessage(context.Background())

    if err != nil {
        break
    }
    if m.Time.After(endTime) {
        break
    }
    // TODO: process message
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}

Logging

For visibility into the operations of the Reader/Writer types, configure a logger on creation.

Reader

func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

r := kafka.NewReader(kafka.ReaderConfig{
	Brokers:     []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:       "my-topic1",
	Partition:   0,
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
})

Writer

func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092"),
	Topic:       "topic",
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
}

Testing

Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, exporting the KAFKA_SKIP_NETTEST=1 environment variable will skip those tests.

Run Kafka locally in docker

docker-compose up -d

Run tests

KAFKA_VERSION=2.3.1 \
  KAFKA_SKIP_NETTEST=1 \
  go test -race ./...

Or, to clean the cached test results and run the tests:

go clean -cache && make test
