Chronon: A Data Platform for AI/ML

Chronon is a platform that abstracts away the complexity of data computation and serving for AI/ML applications. Users define features as transformations of raw data, and Chronon handles batch and streaming computation, scalable backfills, low-latency serving, and guaranteed correctness and consistency, along with a host of observability and monitoring tools.

It allows you to utilize all of the data within your organization, from batch tables, event streams, or services, to power your AI/ML projects, without needing to worry about all of the complex orchestration that this would usually entail.

More information about Chronon can be found at chronon.ai.

Platform Features

Online Serving

Chronon offers an API for realtime fetching which returns up-to-date values for your features. It supports:

  • Managed pipelines for batch and realtime feature computation and updates to the serving backend
  • Low latency serving of computed features
  • Scalable for high fanout feature sets

Backfills

ML practitioners often need historical views of feature values for model training and evaluation. Chronon's backfills are:

  • Scalable for large time windows
  • Resilient to highly skewed data
  • Point-in-time accurate such that consistency with online serving is guaranteed

Observability, monitoring and data quality

Chronon offers visibility into:

  • Data freshness - ensure that online values are being updated in realtime
  • Online/Offline consistency - ensure that backfill data for model training and evaluation is consistent with what is being observed in online serving

Complex transformations and windowed aggregations

Chronon supports a range of aggregation types. For a full list see the documentation here.

These aggregations can all be configured to be computed over arbitrary window sizes.
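
As a minimal sketch of that idea (the import path and the purchase_price column are assumptions for illustration; the Quickstart below shows the same pattern in full), an aggregation is declared once and computed over several windows at the same time:

# A minimal sketch, mirroring the GroupBy API used in the Quickstart below.
# The import path and the purchase_price column are assumptions for illustration.
from ai.chronon.group_by import Aggregation, Operation, Window, TimeUnit

window_sizes = [Window(length=day, timeUnit=TimeUnit.DAYS) for day in [3, 14, 30]]

purchase_sum = Aggregation(
    input_column="purchase_price",  # an input column from an events source
    operation=Operation.SUM,        # any supported operation is configured the same way
    windows=window_sizes            # one output feature per window
)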

Quickstart

This section walks you through the steps to create a training dataset with Chronon, using a fabricated underlying raw dataset.

Includes:

  • Example implementation of the main API components for defining features - GroupBy and Join.
  • The workflow for authoring these entities.
  • The workflow for backfilling training data.
  • The workflows for uploading and serving this data.
  • The workflow for measuring consistency between backfilled training data and online inference data.

Does not include:

  • A deep dive on the various concepts and terminologies in Chronon. For that, please see the Introductory documentation.
  • Running streaming jobs.

Requirements

  • Docker

Setup

To get started with Chronon, all you need to do is download the docker-compose.yml file and run it locally:

curl -o docker-compose.yml https://chronon.ai/docker-compose.yml
docker-compose up

Once you see some data printed along with an "only showing top 20 rows" notice, you're ready to proceed with the tutorial.

Introduction

In this example, let's assume that we're a large online retailer, and we've detected a fraud vector based on users making purchases and later returning items. We want to train a model that will be called when the checkout flow commences and predicts whether this transaction is likely to result in a fraudulent return.

Raw data sources

Fabricated raw data is included in the data directory. It includes four tables:

  1. Users - includes basic information about users such as account created date; modeled as a batch data source that updates daily
  2. Purchases - a log of all purchases by users; modeled as a log table with a streaming (i.e. Kafka) event-bus counterpart
  3. Returns - a log of all returns made by users; modeled as a log table with a streaming (i.e. Kafka) event-bus counterpart
  4. Checkouts - a log of all checkout events; this is the event that drives our model predictions

Start a shell session in the Docker container

In a new terminal window, run:

docker-compose exec main bash

This will open a shell within the Chronon Docker container.

Chronon Development

Now that the setup steps are complete, we can start creating and testing various Chronon objects to define transformations and aggregations, and generate data.

Step 1 - Define some features

Let's start with three feature sets, built on top of our raw input sources.

Note: These Python definitions are already in your Chronon image. There's nothing for you to run until Step 3 - Backfilling Data, when you'll run computation for these definitions.

Feature set 1: Purchases data features

We can aggregate the purchases log data to the user level, to give us a view into this user's previous activity on our platform. Specifically, we can compute SUMs, COUNTs, and AVERAGEs of their previous purchase amounts over various windows.

Because this feature is built upon a source that includes both a table and a topic, its features can be computed in both batch and streaming.

source = Source(
    events=EventSource(
        table="data.purchases", # This points to the log table with historical purchase events
        topic=None, # Streaming is not currently part of quickstart, but this would be where you define the topic for realtime events
        query=Query(
            selects=select("user_id","purchase_price"), # Select the fields we care about
            time_column="ts") # The event time
    ))

window_sizes = [Window(length=day, timeUnit=TimeUnit.DAYS) for day in [3, 14, 30]] # Define some window sizes to use below

v1 = GroupBy(
    sources=[source],
    keys=["user_id"], # We are aggregating by user
    aggregations=[Aggregation(
            input_column="purchase_price",
            operation=Operation.SUM,
            windows=window_sizes
        ), # The sum of purchases prices in various windows
        Aggregation(
            input_column="purchase_price",
            operation=Operation.COUNT,
            windows=window_sizes
        ), # The count of purchases in various windows
        Aggregation(
            input_column="purchase_price",
            operation=Operation.AVERAGE,
            windows=window_sizes
        ) # The average purchases by user in various windows
    ],
)

See the whole code file here: purchases GroupBy. This is also in your docker image. We'll be running computation for it and the other GroupBys in Step 3 - Backfilling Data.

Feature set 2: Returns data features

We perform a similar set of aggregations on returns data in the returns GroupBy. The full code is not reproduced here because it closely mirrors the example above; a rough sketch follows.
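
The sketch below is an approximation only, reusing the window_sizes list from the purchases example: the data.returns table and refund_amt column are assumptions inferred from the feature names used later in this tutorial. See the returns GroupBy file in your image for the actual definition.

source = Source(
    events=EventSource(
        table="data.returns", # Assumed log table with historical return events
        topic=None, # As with purchases, streaming is out of scope for the quickstart
        query=Query(
            selects=select("user_id","refund_amt"), # refund_amt is assumed from the feature names used later
            time_column="ts") # The event time
    ))

v1 = GroupBy(
    sources=[source],
    keys=["user_id"], # Aggregating by user, as with purchases
    aggregations=[
        Aggregation(input_column="refund_amt", operation=Operation.SUM, windows=window_sizes),
        Aggregation(input_column="refund_amt", operation=Operation.COUNT, windows=window_sizes),
        Aggregation(input_column="refund_amt", operation=Operation.AVERAGE, windows=window_sizes),
    ], # Sums, counts, and averages of refund amounts over the same windows
)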

Feature set 3: User data features

Turning User data into features is a little simpler, primarily because there are no aggregations to include. In this case, the primary key of the source data is the same as the primary key of the feature, so we're simply extracting column values rather than performing aggregations over rows:

source = Source(
    entities=EntitySource(
        snapshotTable="data.users", # This points to a table that contains daily snapshots of the entire product catalog
        query=Query(
            selects=select("user_id","account_created_ds","email_verified"), # Select the fields we care about
        )
    ))

v1 = GroupBy(
    sources=[source],
    keys=["user_id"], # Primary key is the same as the primary key for the source table
    aggregations=None # In this case, there are no aggregations or windows to define
) 

Taken from the users GroupBy.

Step 2 - Join the features together

Next, we need the features that we previously defined backfilled in a single table for model training. This can be achieved using the Join API.

For our use case, it's very important that features are computed as of the correct timestamp. Because our model runs when the checkout flow begins, we'll want to be sure to use the corresponding timestamp in our backfill, such that feature values for model training logically match what the model will see in online inference.

Join is the API that drives feature backfills for training data. It primarily performs the following functions:

  1. Combines many features together into a wide view (hence the name Join).
  2. Defines the primary keys and timestamps for which feature backfills should be performed. Chronon can then guarantee that feature values are correct as of this timestamp.
  3. Performs scalable backfills.

Here is what our join looks like:

source = Source(
    events=EventSource(
        table="data.checkouts", 
        query=Query(
            selects=select("user_id"), # The primary key used to join various GroupBys together
            time_column="ts",
            ) # The event time used to compute feature values as-of
    ))

v1 = Join(  
    left=source,
    right_parts=[JoinPart(group_by=group_by) for group_by in [purchases_v1, refunds_v1, users]] # Include the three GroupBys
)

Taken from the training_set Join.

The left side of the join is what defines the timestamps and primary keys for the backfill (notice that it is built on top of the checkout event, as dictated by our use case).

Note that this Join combines the above three GroupBys into one data definition. In the next step, we'll run the command to execute computation for this whole pipeline.

Step 3 - Backfilling Data

Once the join is defined, we compile it using this command:

compile.py --conf=joins/quickstart/training_set.py

This converts it into a thrift definition that we can submit to spark with the following command:

run.py --conf production/joins/quickstart/training_set.v1

The output of the backfill would contain the user_id and ts columns from the left source, as well as the 11 feature columns from the three GroupBys that we created.

Feature values would be computed for each user_id and ts on the left side, with guaranteed temporal accuracy. For example, if one of the rows on the left was for user_id = 123 and ts = 2023-10-01 10:11:23.195, then the purchase_price_avg_30d feature would be computed for that user over a precise 30-day window ending at that timestamp.
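
To make the window semantics concrete, here is a small standalone Python illustration (toy data, not Chronon code) of which purchase events would contribute to that 30-day average:

from datetime import datetime, timedelta

# Toy illustration of point-in-time correctness -- not Chronon code, and the rows are made up.
ts = datetime(2023, 10, 1, 10, 11, 23)      # left-side event time for user_id = 123
window_start = ts - timedelta(days=30)      # precise 30-day window ending at ts

purchases = [                               # (event_time, purchase_price)
    (datetime(2023, 9, 5, 8, 0), 20.0),     # inside the window -> included
    (datetime(2023, 9, 30, 23, 59), 10.0),  # inside the window -> included
    (datetime(2023, 10, 1, 12, 0), 99.0),   # after ts -> excluded (no leakage from the future)
    (datetime(2023, 8, 25, 9, 0), 50.0),    # before the window start -> excluded
]

in_window = [price for (t, price) in purchases if window_start < t <= ts]
purchase_price_avg_30d = sum(in_window) / len(in_window)  # -> 15.0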

You can now query the backfilled data using the spark sql shell:

spark-sql

And then:

spark-sql> SELECT user_id, quickstart_returns_v1_refund_amt_sum_30d, quickstart_purchases_v1_purchase_price_sum_14d, quickstart_users_v1_email_verified from default.quickstart_training_set_v1 limit 100;

Note that this only selects a few columns. You can also run select * from default.quickstart_training_set_v1 limit 100 to see all columns; however, note that the table is quite wide and the results might not be very readable on your screen.

To exit the sql shell you can run:

spark-sql> quit;

Online Flows

Now that we've created a join and backfilled data, the next step would be to train a model. That is not part of this tutorial, but assuming it was complete, the next step after that would be to productionize the model online. To do this, we need to be able to fetch feature vectors for model inference. That's what this next section covers.

Uploading data

In order to serve online flows, we first need the data uploaded to the online KV store. This is different from the backfill that we ran in the previous step in two ways:

  1. The data is not a historic backfill, but rather the most up-to-date feature values for each primary key.
  2. The datastore is a transactional KV store suitable for point lookups. We use MongoDB in the docker image, however you are free to integrate with a database of your choice.

Upload the purchases GroupBy:

run.py --mode upload --conf production/group_bys/quickstart/purchases.v1 --ds  2023-12-01

spark-submit --class ai.chronon.quickstart.online.Spark2MongoLoader --master local[*] /srv/onlineImpl/target/scala-2.12/mongo-online-impl-assembly-0.1.0-SNAPSHOT.jar default.quickstart_purchases_v1_upload mongodb://admin:admin@mongodb:27017/?authSource=admin

Upload the returns GroupBy:

run.py --mode upload --conf production/group_bys/quickstart/returns.v1 --ds  2023-12-01

spark-submit --class ai.chronon.quickstart.online.Spark2MongoLoader --master local[*] /srv/onlineImpl/target/scala-2.12/mongo-online-impl-assembly-0.1.0-SNAPSHOT.jar default.quickstart_returns_v1_upload mongodb://admin:admin@mongodb:27017/?authSource=admin

Upload Join Metadata

If we want to use the FetchJoin API rather than FetchGroupby, then we also need to upload the join metadata:

run.py --mode metadata-upload --conf production/joins/quickstart/training_set.v2

This makes it so that the online fetcher knows how to take a request for this join, break it up into individual GroupBy requests, and return the unified vector, similar to how the Join backfill produces the wide view table with all features.
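
Conceptually, that fan-out and merge looks something like the toy Python sketch below. This is not the real fetcher implementation; the function names, feature names, and values are purely illustrative.

# Toy sketch of what the join metadata enables -- not the actual fetcher code.
def fetch_group_by(name, keys):
    """Stand-in for a point lookup of one GroupBy's latest values in the online KV store."""
    fake_store = {
        "quickstart/purchases.v1": {"purchase_price_sum_30d": 321.0},
        "quickstart/returns.v1": {"refund_amt_sum_30d": 12.5},
        "quickstart/users.v1": {"email_verified": 1},
    }
    return fake_store[name]

def fetch_join(group_by_names, keys):
    """Fan out one GroupBy fetch per right part, then merge into one wide feature vector."""
    features = {}
    for name in group_by_names:
        features.update(fetch_group_by(name, keys))
    return features

print(fetch_join(
    ["quickstart/purchases.v1", "quickstart/returns.v1", "quickstart/users.v1"],
    {"user_id": "5"},
))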

Fetching Data

With the above entities defined, you can now easily fetch feature vectors with a simple API call.

Fetching a join:

run.py --mode fetch --type join --name quickstart/training_set.v2 -k '{"user_id":"5"}'

You can also fetch a single GroupBy (this would not require the Join metadata upload step performed earlier):

run.py --mode fetch --type group-by --name quickstart/purchases.v1 -k '{"user_id":"5"}'

For production, the Java client is usually embedded directly into services.

Map<String, String> keyMap = new HashMap<>();
keyMap.put("user_id", "123");
Fetcher.fetch_join(new Request("quickstart/training_set_v1", keyMap))

sample response

> '{"purchase_price_avg_3d":14.3241, "purchase_price_avg_14d":11.89352, ...}'

Note: This Java code is not runnable in the docker environment; it is just an illustrative example.

Log fetches and measure online/offline consistency

As discussed in the introductory sections of this README, one of Chronon's core guarantees is online/offline consistency. This means that the data that you use to train your model (offline) matches the data that the model sees for production inference (online).

A key element of this is temporal accuracy. This can be phrased as: when backfilling features, the value that is produced for any given timestamp provided by the left side of the join should be the same as what would have been returned online if that feature was fetched at that particular timestamp.

Chronon not only guarantees this temporal accuracy, but also offers a way to measure it.

The measurement pipeline starts with the logs of the online fetch requests. These logs include the primary keys and timestamp of the request, along with the fetched feature values. Chronon then passes the keys and timestamps to a Join backfill as the left side, asking the compute engine to backfill the feature values. It then compares the backfilled values to actual fetched values to measure consistency.
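
As a toy illustration of that final comparison (not the actual consistency job, and the values below are made up), the check amounts to lining up logged and backfilled values by key and timestamp and measuring how often they agree:

# Toy illustration of the online/offline comparison -- not the real consistency job.
logged = {      # (user_id, ts) -> feature value fetched online and logged
    ("5", "2023-12-01 10:00:00"): 321.0,
    ("7", "2023-12-01 11:30:00"): 0.0,
}
backfilled = {  # same keys and timestamps, recomputed offline by the Join backfill
    ("5", "2023-12-01 10:00:00"): 321.0,
    ("7", "2023-12-01 11:30:00"): 5.0,
}

matches = sum(1 for k, v in logged.items() if backfilled.get(k) == v)
consistency_rate = matches / len(logged)
print(f"consistency: {consistency_rate:.0%}")  # -> consistency: 50%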

Step 1: log fetches

First, make sure you've run a few fetch requests. Run the following a few times to generate some fetches:

run.py --mode fetch --type join --name quickstart/training_set.v2 -k '{"user_id":"5"}'

With that complete, you can run this to create a usable log table (these commands produce a logging hive table with the correct schema):

spark-submit --class ai.chronon.quickstart.online.MongoLoggingDumper --master local[*] /srv/onlineImpl/target/scala-2.12/mongo-online-impl-assembly-0.1.0-SNAPSHOT.jar default.chronon_log_table mongodb://admin:admin@mongodb:27017/?authSource=admin
compile.py --conf group_bys/quickstart/schema.py
run.py --mode backfill --conf production/group_bys/quickstart/schema.v1
run.py --mode log-flattener --conf production/joins/quickstart/training_set.v2 --log-table default.chronon_log_table --schema-table default.quickstart_schema_v1

This creates a default.quickstart_training_set_v2_logged table that contains the results of each of the fetch requests that you previously made, along with the timestamp at which you made them and the user that you requested.

Note: Once you run the above command, it will create and "close" the log partitions, meaning that if you make additional fetches on the same day (UTC time), they will not be appended. If you want to go back and generate more requests for online/offline consistency, you can drop the table (run DROP TABLE default.quickstart_training_set_v2_logged in a spark-sql shell) before rerunning the above command.

Now you can compute consistency metrics with this command:

run.py --mode consistency-metrics-compute --conf production/joins/quickstart/training_set.v2

This job will take the primary key(s) and timestamps from the log table (default.quickstart_training_set_v2_logged in this case) and use them to create and run a join backfill. It then compares the backfilled results to the actual logged values that were fetched online.

It produces two output tables:

  1. default.quickstart_training_set_v2_consistency: A human readable table that you can query to see the results of the consistency checks.
    1. You can enter a SQL shell by running spark-sql from your docker bash session, then query the table.
    2. Note that it has many columns (multiple metrics per feature), so you might want to run DESC default.quickstart_training_set_v2_consistency first, then select a few columns that you care about to query.
  2. default.quickstart_training_set_v2_consistency_upload: A list of KV bytes that is uploaded to the online KV store, that can be used to power online data quality monitoring flows. Not meant to be human readable.

Conclusion

Using Chronon for your feature engineering work simplifies and improves your ML workflow in a number of ways:

  1. You can define features in one place, and use those definitions both for training data backfills and for online serving.
  2. Backfills are automatically point-in-time correct, which avoids label leakage and inconsistencies between training data and online inference.
  3. Orchestration for batch and streaming pipelines to keep features up to date is made simple.
  4. Chronon exposes easy endpoints for feature fetching.
  5. Consistency is guaranteed and measurable.

For a more detailed view into the benefits of using Chronon, see Benefits of Chronon documentation.

Benefits of Chronon over other approaches

Chronon offers the most value to AI/ML practitioners who are trying to build "online" models that are serving requests in real-time as opposed to batch workflows.

Without Chronon, engineers working on these projects need to figure out how to get data to their models for training/eval as well as production inference. As the complexity of data going into these models increases (multiple sources, complex transformations such as windowed aggregations, etc.), so does the infrastructure challenge of supporting this data plumbing.

Generally, we observed ML practitioners taking one of two approaches:

The log-and-wait approach

With this approach, users start with the data that is available in the online serving environment from which the model inference will run. They log relevant features to the data warehouse, and once enough data has accumulated, they train the model on the logs and serve with the same data.

Pros:

  • Features used to train the model are guaranteed to be available at serving time
  • The model can access service call features
  • The model can access data from the request context

Cons:

  • It might take a long time to accumulate enough data to train the model
  • Performing windowed aggregations is not always possible (running large range queries against production databases doesn't scale, same for event streams)
  • Cannot utilize the wealth of data already in the data warehouse
  • Maintaining data transformation logic in the application layer is messy

The replicate offline-online approach

With this approach, users train the model with data from the data warehouse, then figure out ways to replicate those features in the online environment.

Pros:

  • You can use a broad set of data for training
  • The data warehouse is well suited for large aggregations and other computationally intensive transformations

Cons:

  • Often very error prone, resulting in inconsistent data between training and serving
  • Requires maintaining a lot of complicated infrastructure even to get started with this approach
  • Serving features with realtime updates gets even more complicated, especially with large windowed aggregations
  • Unlikely to scale well to many models

The Chronon approach

With Chronon, you can use any data available in your organization, including everything in the data warehouse, any streaming source, service calls, etc., with guaranteed consistency between online and offline environments. It abstracts away the infrastructure complexity of orchestrating and maintaining this data plumbing, so that users can simply define features in a simple API and trust Chronon to handle the rest.

Contributing

We welcome contributions to the Chronon project! Please read CONTRIBUTING for details.

Support

Use the GitHub issue tracker for reporting bugs or feature requests. Join our community Discord channel for discussions, tips, and support.
