
Spydra (Beta / Inactive)


Note: This project is inactive.

Ephemeral Hadoop clusters using Google Cloud Platform

Description

Spydra is "Hadoop Cluster as a Service" implemented as a library utilizing Google Cloud Dataproc and Google Cloud Storage. The intention of Spydra is to enable the use of ephemeral Hadoop clusters while hiding the complexity of cluster lifecycle management and keeping troubleshooting simple. Spydra is designed to be integrated as a hadoop jar replacement.

Spydra is part of Spotify's effort to migrate its data infrastructure to Google Cloud Platform and is being used in production. The principles and the design of Spydra are based on our experience in scaling and maintaining our Hadoop cluster to over 2500 nodes and over 100 PB of capacity, running about 20,000 independent jobs per day.

Spydra supports submitting data processing jobs to Dataproc as well as to existing on-premise Hadoop infrastructure and is designed to ease the migration to and/or dual use of Google Cloud Platform and on-premise infrastructure.

Spydra is designed to be highly configurable and allows the use of all job types and configurations supported by the gcloud dataproc clusters create and gcloud dataproc jobs submit commands.

Development Status

Spydra is a rewrite of a concept that had been developed at Spotify for more than a year. The current version of Spydra is in beta, used in production at Spotify, and actively developed and supported by our data infrastructure team.

Spydra is in beta and things might change, but we aim not to break the currently exposed APIs and configuration.

Spydra at Spotify

At Spotify, Spydra is being used for our on-going migration to Google Cloud Platform. It handles the submission of on-premise Hadoop jobs as well as Dataproc jobs, simplifying the switch from on-premise Hadoop to Dataproc.

Spydra is packaged in a Docker image that is used to deploy data pipelines. This image includes the Hadoop tools and configuration needed to submit to our on-premise Hadoop cluster, as well as an installation of gcloud and other basic dependencies required to execute Hadoop jobs in our environment. Pipelines are scheduled using Styx and orchestrated by Luigi, which invokes Spydra instead of hadoop jar.

Design

Spydra is built as a wrapper around Google Cloud Dataproc and is designed not to have any central component. It exposes all functionality supported by Dataproc via its own configuration while adding some defaults. Spydra manages clusters and submits jobs by invoking the gcloud dataproc command. Spydra ensures that clusters are eventually deleted by updating a heartbeat marker in the cluster's metadata and uses initialization actions to set up a self-deletion script on the cluster that handles deletion in the event of client failures.
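
As a rough sketch of this mechanism (the bucket, script name, and metadata key below are illustrative, not Spydra's actual defaults), the cluster creation corresponds to a gcloud call that attaches the self-deletion script as an initialization action and records a heartbeat in the cluster metadata:

$ gcloud dataproc clusters create spydra-example-cluster \
    --region=europe-west1 \
    --initialization-actions=gs://YOUR_INIT_ACTION_BUCKET/spydra/self-deletion.sh \
    --metadata=heartbeat=$(date -u +%FT%TZ)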

For submitting jobs to an existing on-premise Hadoop infrastructure, Spydra utilizes the hadoop jar command which is required to be installed and configured in the environment.

For Dataproc as well as on-premise submissions, Spydra acts similarly to hadoop jar and prints out driver output.

Credentials

Spydra is designed to ease the usage of Google Cloud Platform credentials by utilizing service accounts. The same credential that is used locally by Spydra to manage the cluster and submit jobs is also, by default, forwarded to the Hadoop cluster when calling Dataproc. This means that access rights to resources need only be granted to a single set of credentials.
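
For example (the key path is a placeholder), pointing both Spydra and the gcloud tooling at a single service account key locally is enough; Spydra then forwards that same credential to the cluster by default:

$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
$ gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"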

Storing Execution Data and Logs

To make job execution data available after an ephemeral cluster has been shut down, and to provide functionality similar to the Hadoop MapReduce History Server, Spydra stores execution data and logs on Google Cloud Storage, grouping them by a user-defined client id. Typically, the client id is unique per job. The execution data and logs are then made available via Spydra commands, which allow spinning up a local MapReduce History Server to access execution data and logs as well as dumping them.

Autoscaler

Spydra has an experimental autoscaler that can be executed on the cluster. It monitors the current resource utilization of the cluster and scales the cluster according to a user-defined utilization factor and maximum worker count by adding preemptible VMs. Note that the use of preemptible VMs might negatively impact performance, as nodes might be shut down at any time.

The autoscaler is installed on the cluster using a Dataproc initialization action.

Cluster Pooling

Spydra has experimental support for cluster pooling within a single Google Cloud Platform project. Cluster pooling can be used to limit the resources used by job submissions and to reduce cluster initialization overhead. The maximum number of clusters to be used can be defined, as well as their maximum lifetime. Upon job submission, a random cluster from the pool is chosen to submit the job to. When they reach their maximum lifetime, pooled clusters are deleted by the self-deletion mechanism.

Usage

Installation

A pre-built Spydra is available on Maven Central. It is built using the parameters from .travis.yml; the spydra-init-actions bucket is provided by Spotify.

Prerequisites

To be able to use Dataproc and on-premise Hadoop, a few things need to be set up before using Spydra.

Spydra CLI

Spydra CLI supports multiple sub-commands:

Submission

$ java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar submit --help

usage: submit [options] [jobArgs]
    --clientid <arg>     client id, used as identifier in job history output
    --spydra-json <arg>  path to the spydra configuration json
    --jar <arg>          main jar path, overwrites the configured one if
                         set
    --jars <arg>         jar files to be shipped with the job, can occur
                         multiple times, overwrites the configured ones if
                         set
    --job-name <arg>     job name, used as dataproc job id
 -n,--dry-run            Do a dry run without executing anything

Only a few basic things can be supplied on the command line: a client id (an arbitrary identifier of the client running Spydra), the main and additional JAR files for the job, and arguments for the job. For any use case requiring more detail, the user needs to create a JSON file and supply its path as a parameter. Command-line options override the corresponding options in the JSON config. Apart from the command-line options and some general settings, the JSON config can also transparently pass along parameters to the gcloud command for cluster creation or job submission.
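
For example (file and jar names are placeholders), a submission might take most of its settings from a JSON config while overriding the jar and job name on the command line:

$ java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar submit --spydra-json=example.json --jar=my-job.jar --job-name=my-job pi 8 100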

A job name can also be supplied. This will be sanitized and have a unique identifier attached to it, which will then be used as the Dataproc job ID. This is useful in finding the job in the Google Cloud Console.

The spydra-json argument

All properties that cannot be controlled via the few arguments of the submit command can be set in the configuration file supplied with the --spydra-json parameter. The configuration file follows the structure of the gcloud dataproc clusters create and gcloud dataproc jobs submit commands and allows setting all the possible arguments for these commands. The basic structure looks as follows:

{
  "client_id": "spydra-test",                 # Spydra client id. Usually left out as set by the frameworks during runtime.
  "cluster_type": "dataproc",                 # Where to execute. Either dataproc or onpremise. Defaults to onpremise.
  "job_type": "hadoop",                       # Defaults to hadoop. For supported types see gcloud dataproc jobs submit --help
  "log_bucket": "spydra-test-logs",           # The bucket where Hadoop logs and history information are stored.
  "region": "europe-west1",                   # The region in which the cluster is spun up
  "cluster": {                                # All cluster related configuration
    "options": {                              # Map supporting all options from the gcloud dataproc clusters create command
      "project": "spydra-test",
      "num-workers": "13",
      "worker-machine-type": "n1-standard-2", # The default machine type used by Dataproc is n1-standard-8.
      "master-machine-type": "n1-standard-4"
    }
  },
  "submit": {                                 # All configuration related to job submission
    "job_args": [                             # Job arguments. Usually left out as set by the frameworks during runtime.
      "pi",
      "2",
      "2"
    ],
    "options": {                              # Map supporting all options from the gcloud dataproc jobs submit [hadoop,spark,hive...] command
      "jar": "/path/my.jar"                   # Path of the job jar file. Usually left out as set by the frameworks during runtime.
    }
  }
}

For details on the format of the JSON file see this schema and these examples.

Minimal Submission Example

Using only the command-line:

$ java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar submit --client-id simple-spydra-test --jar hadoop-mapreduce-examples.jar pi 8 100

JSON config:

$ cat example.json
{
  "client_id": "simple-spydra-test",
  "cluster_type": "dataproc",
  "log_bucket": "spydra-test-logs",
  "region": "europe-west1",
  "cluster": {
    "options": {
      "project": "spydra-test"
    }
  },
  "submit": {
    "job_args": [
      "pi",
      "8",
      "100"
    ],
    "options": {
      "jar": "hadoop-mapreduce-examples.jar"
    }
  }
}
$ spydra submit --spydra-json example.json

Cluster Autoscaling (Experimental)

The Spydra autoscaler provides automatic sizing for Spydra clusters by adding enough preemptible worker nodes so that a user-supplied percentage of containers is running in parallel on the cluster. It enables cluster sizes to adjust automatically to growing resource needs over time and removes the need to come up with a good size when scheduling a job executed on Spydra. The autoscaler has two modes: upscale only and downscale.

Downscale will remove nodes when the cluster is not fully utilized. After choosing to downscale, it will wait for the downscale_timeout to allow active jobs to complete before terminating nodes. Note that though nodes may not have active YARN containers running, active jobs may be storing intermediate "shuffle" data on them. See Dataproc Graceful Downscale for more information.

To enable autoscaling, add an autoscaler section similar to the one below to your Spydra configuration.

{
  "cluster": {...},
  "submit": {...},
  "auto_scaler": {
    "interval": "2",         # Execution interval of the autoscaler in minutes
    "max": "20",             # Maximum number of workers
    "factor": "0.3",         # Percentage of YARN containers that should be running at any point in time, 0.0 to 1.0
    "downscale": "false",    # Whether or not to downscale.
    # If downscale is enabled, how long in minutes to wait for active jobs to finish
    # before terminating nodes and potentially interrupting those jobs.
    # Note that the autoscaler will not be able to add nodes during this interval.
    "downscale_timeout": "10"
  }
}

Static Cluster Submission

If you prefer to manage your Dataproc clusters manually, you can still use Spydra for job submission and skip the dynamic cluster creation. The only change needed in the Spydra configuration is to specify the name of the cluster you want to submit the job to. Here is an example:

{
  "client_id": "simple-spydra-test",
  "cluster_type": "dataproc",
  "log_bucket": "spydra-test-logs",
  "submit": {
    "options": {
      "project": "spydra-test",
      "cluster": "NAME_OF_YOUR_CLUSTER",
      "jar": "hadoop-mapreduce-examples.jar"
    },
    "job_args": [
      "pi",
      "8",
      "100"
    ]
  }
}

Also notice that the project parameter is specified in the submit/options section instead of the cluster/options section.

Cluster Pooling (Experimental)

Disclaimer: The use of cluster pooling is experimental!

Spydra's cluster pooling provides automatic reuse of Spydra clusters by selecting an existing cluster according to certain conditions instead of creating a new one for each submission.

To enable cluster pooling, add a pooling section similar to the one below to your Spydra configuration.

{
  "cluster": {...},
  "submit": {...},
  "pooling": {
    "limit": 2,       # Limit on the number of concurrent clusters
    "max_age": "P1D"  # A java.time.Duration for the maximum age of a cluster
  }
}

Submission Gotchas
  • You can use -- if you need to pass a parameter starting with dashes to your job, e.g. submit --jar=jar ... -- -myParam (see the sketch after this list).
  • Don't forget to specify = for arguments like --jar=$jar, otherwise the CLI parsing will break.
  • If the specified jar contains a Main-Class entry in its manifest, specifying --mainclass will often lead to undesired behavior, as the value of main-class will be passed as an argument to the application instead of invoking this class.
  • Not setting the default fs to GCS using the fs.defaultFS property can lead to crashes and undesired behavior, as many frameworks use the default filesystem implementation instead of getting the correct filesystem for a given URI. It can also lead to the Crunch output committer working very slowly while copying all files from HDFS to GCS in a final non-distributed step.
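
A sketch of the gotchas above (file, jar, and bucket names are placeholders): a dash-prefixed job argument passed after --, option values given with =, and the default filesystem pointed at GCS through the cluster properties option (whether Spydra already sets fs.defaultFS for you is not stated here):

$ java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar submit --spydra-json=example.json --jar=my-job.jar -- -myParam someValue

{
  "cluster": {
    "options": {
      "properties": "core:fs.defaultFS=gs://YOUR_DATA_BUCKET"
    }
  }
}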

Running an Embedded JobHistoryServer

The run-jhs command is designed for interactive exploration of job execution. It spawns an embedded JobHistoryServer that can display all jobs executed using the client id associated with your job submission. Familiarity with the JobHistoryServer from on-premise Hadoop is assumed. The JHS is accessible on the default port 19888.

The client id used when executing the job and the log bucket are required to run the run-jhs command.

java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar run-jhs --clientid=JOB_CLIENT_ID --log-bucket=LOG_BUCKET

Retrieving Logs

The dump-logs command will dump the logs for an application to stdout. Currently only the full logs of the YARN application can be dumped, similar to yarn logs when no specific container is specified. This is useful for processing or exploration with further tools in the shell.

The client id used when executing the job, the Hadoop application id, and the log bucket are required to run the dump-logs command.

java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar dump-logs --clientid=MY_CLIENT_ID --username=HADOOP_USER_NAME --log-bucket=LOG_BUCKET --application=APPLICATION_ID

Retrieving History Data

The history files can be dumped as in regular Hadoop using the dump-history command.

The client id used when executing the job, the Hadoop application id, and the log bucket are required to run the dump-history command.

java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar dump-history --clientid=MY_CLIENT_ID --log-bucket=LOG_BUCKET --application=APPLICATION_ID

Accessing Hadoop Web Interfaces for Ephemeral Clusters

Dataprocxy can be used to open the web interfaces of the Hadoop daemons of an ephemeral cluster as long as the cluster is running.

Building

Prerequisites

  • Java JDK 8
  • Maven 3.2.2
  • A Google Cloud Platform project with Dataproc enabled
  • A Google Cloud Storage bucket for uploading init-actions. Ensure that this bucket is readable with all credentials used with Spydra.
  • A Google Cloud Storage bucket for storing integration test logs
  • JSON key for a service account with editor access to the project and bucket.
  • The environment variable GOOGLE_APPLICATION_CREDENTIALS pointing at the location of the service account JSON key
  • gcloud authenticated with the service account
  • gsutil authenticated with the service account

Integration Test Configuration

In order to run the integration tests, basic configuration needs to be provided during the build process. Create a file named integration-test-config.json similar to the one below and reference it during the Maven invocation.

{
  "log_bucket": "YOUR_GCS_LOG_BUCKET",
  "region": "europe-west1"
}

Replace YOUR_GCS_LOG_BUCKET with a bucket in your GCP project used for storing the logs.

The project will be taken from the service account credentials; you do not need to specify the project parameter in integration-test-config.json (or elsewhere).

Notice that the file name must be exactly integration-test-config.json, as that is what the integration test will look for when it runs in the Maven verify phase.

Integration testing with application default credentials

Due to a limitation in the GCS Connector library, the integration tests do not work when using application default credentials unless the tests are launched on a Google Cloud Platform managed node. Scripts for launching the tests in a Google Kubernetes Engine cluster are provided in integration_test_k8s.

Build, Test and Package

In the following command, replace YOUR_INIT_ACTION_BUCKET with the bucket you created when setting up the prerequisites, and YOUR_TEST_CONFIG_DIR with the name of a directory containing the integration-test-config.json file you created in the previous step. YOUR_TEST_CONFIG_DIR cannot be the same as the package root, so create a separate directory for this purpose. Then execute the Maven command:

mvn clean install -Dinit-action-uri=gs://YOUR_INIT_ACTION_BUCKET/spydra -Dtest-configuration-dir=YOUR_TEST_CONFIG_DIR

Executing the Maven command above runs the integration tests and creates a spydra-VERSION-jar-with-dependencies.jar under spydra/target that packages Spydra and can be executed with java -jar. Use package instead of install to run only the unit tests and package Spydra.

If you want to copy the init-scripts into the defined init-action bucket, activate the install-init-scripts profile:

mvn clean install -Pinstall-init-scripts -Dinit-action-uri=gs://YOUR_INIT_ACTION_BUCKET/spydra -Dtest-configuration-dir=YOUR_TEST_CONFIG_DIR

Do not run the Maven deploy step, as it will try to upload the created packages to Spotify-owned repositories, which will fail unless you have Spotify-specific credentials.

Communications

If you use Spydra and experience any issues, please create an issue in this GitHub project.

You can also ask for help and discuss Spydra-related issues in the #spydra channel of the Spotify FOSS Slack.

Contributing

This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.
