
Winterfell 🐺

A STARK prover and verifier for arbitrary computations.

WARNING: This is a research project. It has not been audited and may contain bugs and security flaws. This implementation is NOT ready for production use.

Overview

A STARK is a novel proof-of-computation scheme that creates efficiently verifiable proofs of the correct execution of a computation. The scheme was developed by Eli Ben-Sasson, Michael Riabzev et al. at Technion - Israel Institute of Technology. STARKs do not require an initial trusted setup, and rely on very few cryptographic assumptions. See references for more info.

The aim of this project is to build a feature-rich, easy-to-use, and highly performant STARK prover which can generate integrity proofs for very large computations. The STARK proof generation process is massively parallelizable; however, it also requires a lot of RAM. For very large computations, the amount of RAM available on a single machine may not be sufficient to generate a proof efficiently. Therefore, our final goal is to efficiently distribute proof generation across many machines.

Status and features

Winterfell is a fully-functional, multi-threaded STARK prover and verifier with the following nice properties:

A simple interface. The library provides a relatively simple interface for describing general computations. See usage for a quick tutorial, air crate for the description of the interface, and examples crate for a few real-world examples.

Randomized AIR support. The library supports multi-stage trace commitments, which enables support for randomized AIR. This greatly increases the expressivity of AIR constraints, and enables, among other things, multiset and permutation checks similar to the ones available in PLONKish systems.

Multi-threaded proof generation. When compiled with concurrent feature enabled, the proof generation process will run in multiple threads. The library also supports concurrent construction of execution trace tables. The performance section showcases the benefits of multi-threading.

Configurable fields. Both the base and the extension field for proof generation can be chosen dynamically. This simplifies fine-tuning of proof generation for specific performance and security targets. See math crate for description of currently available fields.

Configurable hash functions. The library allows dynamic selection of hash functions used in the STARK protocol. Currently, BLAKE3 and SHA3 hash functions are supported, and support for arithmetization-friendly hash functions (e.g. Rescue) is planned.

WebAssembly support. The library is written in pure Rust and can be compiled to WebAssembly. The std standard library is enabled as a default feature for both the prover and verifier crates. For WASM targets, compile with default features disabled using the --no-default-features flag.
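For example, a WASM build of the verifier might look like the sketch below. This is an assumption-laden illustration: the wasm32-unknown-unknown target is one common choice, and winter-verifier is the assumed package name of the verifier crate; check the workspace for exact crate names.

```shell
# One-time setup: add the WASM target (assumes rustup is installed).
rustup target add wasm32-unknown-unknown

# Build with default features (including std) disabled.
# "winter-verifier" is the assumed package name of the verifier crate.
cargo build -p winter-verifier --no-default-features --target wasm32-unknown-unknown
```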

Planned features

Over time, we hope to extend the library with additional features:

Distributed prover. Distributed proof generation is the main priority of this project, and we hope to release an update containing it soon.

Perfect zero-knowledge. The current implementation provides succinct proofs but NOT perfect zero-knowledge. This means that, in its current form, the library may not be suitable for use cases where proofs must not leak any info about secret inputs.

Project structure

The project is organized into several crates like so:

| Crate | Description |
| ----- | ----------- |
| examples | Contains examples of generating/verifying proofs for several toy and real-world computations. |
| prover | Contains an implementation of a STARK prover which can be used to generate computational integrity proofs. |
| verifier | Contains an implementation of a STARK verifier which can verify proofs generated by the Winterfell prover. |
| winterfell | Re-exports prover and verifier crates as a single crate for simplified dependency management. |
| air | Contains components needed to describe arbitrary computations in a STARK-specific format. |
| fri | Contains implementation of a FRI prover and verifier. These are used internally by the STARK prover and verifier. |
| math | Contains modules with math operations needed in STARK proof generation/verification. These include: finite field arithmetic, polynomial arithmetic, and FFTs. |
| crypto | Contains modules with cryptographic operations needed in STARK proof generation/verification. Specifically: hash functions and Merkle trees. |

Usage

Generating STARK proofs for a computation is a relatively complicated process. This library aims to abstract away most of the complexity; however, users are still expected to provide descriptions of their computations in a STARK-specific format. This format is called algebraic intermediate representation, or AIR for short.

This library contains several higher-level constructs which make defining AIRs for computations a little easier, and there are also examples of AIRs for several computations available in the examples crate. However, the best way to understand the STARK proof generation process is to go through a trivial example from start to finish.

First, we'll need to pick a computation for which we'll be generating and verifying STARK proofs. To keep things simple, we'll use the following:

use winterfell::math::{fields::f128::BaseElement, FieldElement};

fn do_work(start: BaseElement, n: usize) -> BaseElement {
    let mut result = start;
    for _ in 1..n {
        result = result.exp(3) + BaseElement::new(42);
    }
    result
}

This computation starts with an element in a finite field and then, for the specified number of steps, cubes the element and adds value 42 to it.

Suppose we run this computation for a million steps and get some result. Using STARKs, we can prove that we did the work correctly without requiring any verifying party to re-execute the computation. Here is how to do it.

First, we need to define an execution trace for our computation. This trace should capture the state of the computation at every step of its execution. In our case, the trace is just a single column of intermediate values after each execution of the loop. For example, if we start with value 3 and run the computation for 1,048,576 (same as 2^20) steps, the execution trace will look like this:

| Step | State |
| ---- | ----- |
| 0 | 3 |
| 1 | 69 |
| 2 | 328551 |
| 3 | 35465687262668193 |
| 4 | 237280320818395402166933071684267763523 |
| ... | ... |
| 1,048,575 | 247770943907079986105389697876176586605 |
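As a sanity check, the first few states above can be reproduced with plain integer arithmetic, with no dependency on the library: the values stay far below the f128 field modulus for the first three steps, so no modular reduction occurs yet. A standalone sketch:

```rust
// Reproduce the first few execution-trace states of do_work(3, n) using
// plain u128 arithmetic. This only works while intermediate values stay
// below the f128 field modulus, i.e. for the first few steps.
fn main() {
    let mut state: u128 = 3;
    let mut trace = vec![state];
    for _ in 0..3 {
        state = state * state * state + 42; // cube the state and add 42
        trace.push(state);
    }
    // These match the first rows of the trace table above.
    assert_eq!(trace[1], 69);
    assert_eq!(trace[2], 328_551);
    assert_eq!(trace[3], 35_465_687_262_668_193);
    println!("first trace states: {:?}", trace);
}
```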

To record the trace, we'll use the TraceTable struct provided by the library. The function below is just a modified version of the do_work() function which records every intermediate state of the computation in the TraceTable struct:

use winterfell::{
    math::{fields::f128::BaseElement, FieldElement},
    TraceTable,
};

pub fn build_do_work_trace(start: BaseElement, n: usize) -> TraceTable<BaseElement> {
    // Instantiate the trace with a given width and length; this will allocate all
    // required memory for the trace
    let trace_width = 1;
    let mut trace = TraceTable::new(trace_width, n);

    // Fill the trace with data; the first closure initializes the first state of the
    // computation; the second closure computes the next state of the computation based
    // on its current state.
    trace.fill(
        |state| {
            state[0] = start;
        },
        |_, state| {
            state[0] = state[0].exp(3u32.into()) + BaseElement::new(42);
        },
    );

    trace
}

Next, we need to define algebraic intermediate representation (AIR) for our computation. This process is usually called arithmetization. We do this by implementing the Air trait. At the high level, the code below does three things:

  1. Defines what the public inputs for our computation should look like. These inputs are called "public" because they must be known to both the prover and the verifier.
  2. Defines a transition function with a single transition constraint. This transition constraint must evaluate to zero for all valid state transitions, and to a non-zero value for any invalid state transition. The degree of this constraint is 3 (see more about constraint degrees here).
  3. Defines two assertions against an execution trace of our computation. These assertions tie a specific set of public inputs to a specific execution trace (see more about assertions here).

For more information about arithmetization, see the air crate. Here is the actual code:

use winterfell::{
    math::{fields::f128::BaseElement, FieldElement, ToElements},
    Air, AirContext, Assertion, ByteWriter, EvaluationFrame, ProofOptions, TraceInfo,
    TransitionConstraintDegree,
};

// Public inputs for our computation will consist of the starting value and the end result.
pub struct PublicInputs {
    start: BaseElement,
    result: BaseElement,
}

// We need to describe how public inputs can be converted to field elements.
impl ToElements<BaseElement> for PublicInputs {
    fn to_elements(&self) -> Vec<BaseElement> {
        vec![self.start, self.result]
    }
}

// For a specific instance of our computation, we'll keep track of the public inputs and
// the computation's context which we'll build in the constructor. The context is used
// internally by the Winterfell prover/verifier when interpreting this AIR.
pub struct WorkAir {
    context: AirContext<BaseElement>,
    start: BaseElement,
    result: BaseElement,
}

impl Air for WorkAir {
    // First, we'll specify which finite field to use for our computation, and also what
    // the public inputs must look like.
    type BaseField = BaseElement;
    type PublicInputs = PublicInputs;

    // Here, we'll construct a new instance of our computation which is defined by 3 parameters:
    // starting value, number of steps, and the end result. Another way to think about it is
    // that an instance of our computation is a specific invocation of the do_work() function.
    fn new(trace_info: TraceInfo, pub_inputs: PublicInputs, options: ProofOptions) -> Self {
        // our execution trace should have only one column.
        assert_eq!(1, trace_info.width());

        // Our computation requires a single transition constraint. The constraint itself
        // is defined in the evaluate_transition() method below, but here we need to specify
        // the expected degree of the constraint. If the expected and actual degrees of the
        // constraints don't match, an error will be thrown in the debug mode, but in release
        // mode, an invalid proof will be generated which will not be accepted by any verifier.
        let degrees = vec![TransitionConstraintDegree::new(3)];

        // We also need to specify the exact number of assertions we will place against the
        // execution trace. This number must be the same as the number of items in a vector
        // returned from the get_assertions() method below.
        let num_assertions = 2;

        WorkAir {
            context: AirContext::new(trace_info, degrees, num_assertions, options),
            start: pub_inputs.start,
            result: pub_inputs.result,
        }
    }

    // In this method we'll define our transition constraints; a computation is considered to
    // be valid, if for all valid state transitions, transition constraints evaluate to all
    // zeros, and for any invalid transition, at least one constraint evaluates to a non-zero
    // value. The `frame` parameter will contain current and next states of the computation.
    fn evaluate_transition<E: FieldElement + From<Self::BaseField>>(
        &self,
        frame: &EvaluationFrame<E>,
        _periodic_values: &[E],
        result: &mut [E],
    ) {
        // First, we'll read the current state, and use it to compute the expected next state
        let current_state = &frame.current()[0];
        let next_state = current_state.exp(3u32.into()) + E::from(42u32);

        // Then, we'll subtract the expected next state from the actual next state; this will
        // evaluate to zero if and only if the expected and actual states are the same.
        result[0] = frame.next()[0] - next_state;
    }

    // Here, we'll define a set of assertions about the execution trace which must be satisfied
    // for the computation to be valid. Essentially, this ties computation's execution trace
    // to the public inputs.
    fn get_assertions(&self) -> Vec<Assertion<Self::BaseField>> {
        // for our computation to be valid, value in column 0 at step 0 must be equal to the
        // starting value, and at the last step it must be equal to the result.
        let last_step = self.trace_length() - 1;
        vec![
            Assertion::single(0, 0, self.start),
            Assertion::single(0, last_step, self.result),
        ]
    }

    // This is just boilerplate which is used by the Winterfell prover/verifier to retrieve
    // the context of the computation.
    fn context(&self) -> &AirContext<Self::BaseField> {
        &self.context
    }
}

Next, we need to define our prover. This can be done by implementing the [Prover] trait. The trait is pretty simple and has just a few required methods. Here is how our implementation could look:

use winterfell::{
    crypto::hashers::Blake3_256,
    math::{fields::f128::BaseElement, FieldElement},
    ProofOptions, Prover, Trace, TraceTable,
};

// Our prover needs to hold STARK protocol parameters which are specified via ProofOptions
// struct.
struct WorkProver {
    options: ProofOptions
}

impl WorkProver {
    pub fn new(options: ProofOptions) -> Self {
        Self { options }
    }
}

// When implementing Prover trait we set the `Air` associated type to the AIR of the
// computation we defined previously, and set the `Trace` associated type to `TraceTable`
// struct as we don't need to define a custom trace for our computation.
impl Prover for WorkProver {
    type BaseField = BaseElement;
    type Air = WorkAir;
    type Trace = TraceTable<Self::BaseField>;
    type HashFn = Blake3_256<Self::BaseField>;

    // Our public inputs consist of the first and last value in the execution trace.
    fn get_pub_inputs(&self, trace: &Self::Trace) -> PublicInputs {
        let last_step = trace.length() - 1;
        PublicInputs {
            start: trace.get(0, 0),
            result: trace.get(0, last_step),
        }
    }

    fn options(&self) -> &ProofOptions {
        &self.options
    }
}

Now we are finally ready to generate a STARK proof. The function below will execute our computation and return the result together with a proof that the computation was executed correctly.

use winterfell::{
    math::{fields::f128::BaseElement, FieldElement},
    FieldExtension, ProofOptions, StarkProof,
};

pub fn prove_work() -> (BaseElement, StarkProof) {
    // We'll just hard-code the parameters here for this example.
    let start = BaseElement::new(3);
    let n = 1_048_576;

    // Build the execution trace and get the result from the last step.
    let trace = build_do_work_trace(start, n);
    let result = trace.get(0, n - 1);

    // Define proof options; these will be enough for ~96-bit security level.
    let options = ProofOptions::new(
        32, // number of queries
        8,  // blowup factor
        0,  // grinding factor
        FieldExtension::None,
        8,   // FRI folding factor
        128, // FRI max remainder length
    );

    // Instantiate the prover and generate the proof.
    let prover = WorkProver::new(options);
    let proof = prover.prove(trace).unwrap();

    (result, proof)
}

We can then give this proof (together with the public inputs) to anyone, and they can verify that we did in fact execute the computation and got the claimed result. They can do this like so:

pub fn verify_work(start: BaseElement, result: BaseElement, proof: StarkProof) {
    // The number of steps and the proof options are encoded in the proof itself,
    // so we don't need to pass them explicitly to the verifier.
    let pub_inputs = PublicInputs { start, result };
    match winterfell::verify::<WorkAir, Blake3_256<BaseElement>>(proof, pub_inputs) {
        Ok(_) => println!("yay! all good!"),
        Err(_) => panic!("something went terribly wrong!"),
    }
}

That's all there is to it! As mentioned above, the examples crate contains examples of much more interesting computations (together with instructions on how to compile and run these examples). So, do check it out.

Performance

The Winterfell prover's performance depends on a large number of factors including the nature of the computation itself, efficiency of encoding the computation in AIR, proof generation parameters, hardware setup etc. Thus, the benchmarks below should be viewed as directional: they illustrate the general trends, but concrete numbers will be different for different computations, choices of parameters etc.

The computation we benchmark here is a chain of Rescue hash invocations (see examples for more info). The benchmarks were run on an Intel Core i9-9980HK @ 2.4 GHz with 32 GB of RAM, using all 8 cores.

| Chain length | Trace time | Proving time (96-bit) | Proof size (96-bit) | Proving time (128-bit) | Proof size (128-bit) | R1CS equiv. |
| --- | --- | --- | --- | --- | --- | --- |
| 2^10 | 0.1 sec | 0.04 sec | 51 KB | 0.07 sec | 102 KB | 2^18 constr. |
| 2^12 | 0.4 sec | 0.14 sec | 65 KB | 0.25 sec | 128 KB | 2^20 constr. |
| 2^14 | 1.4 sec | 0.6 sec | 80 KB | 1 sec | 156 KB | 2^22 constr. |
| 2^16 | 6 sec | 2.5 sec | 94 KB | 4 sec | 184 KB | 2^24 constr. |
| 2^18 | 24 sec | 11 sec | 112 KB | 18 sec | 216 KB | 2^26 constr. |
| 2^20 | 94 sec | 50 sec | 128 KB | 89 sec | 252 KB | 2^28 constr. |

A few remarks about these benchmarks:

  • Trace time is the time it takes to generate an execution trace for the computation. This time does not depend on the chosen security level. For this specific computation, trace generation must be sequential, and thus, cannot take advantage of multiple cores. However, for other computations, where execution trace can be generated in parallel, trace time would be much smaller in relation to the proving time (see below).
  • R1CS equiv. is a very rough estimate of how many R1CS constraints would be required for this computation. The assumption here is that a single invocation of Rescue hash function requires ~250 R1CS constraints.
  • Not shown in the table: the time it takes to verify proofs in all benchmarks above is between 2 ms and 6 ms using a single CPU core.
  • As can be seen from the table, with STARKs, we can dynamically trade off proof size, proof security level, and proving time against each other.
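The R1CS equivalence column is simple arithmetic: for the first table row, 2^10 hash invocations at roughly 250 constraints each come out to about 2^18 constraints. A standalone check of that estimate:

```rust
fn main() {
    let hashes: u64 = 1 << 10; // chain length from the first table row
    let constraints_per_hash: u64 = 250; // rough estimate quoted above
    let total = hashes * constraints_per_hash;
    assert_eq!(total, 256_000);
    // 2^18 = 262,144, so 256,000 rounds to ~2^18 constraints.
    assert!((1 << 17) < total && total < (1 << 18));
}
```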

Let's benchmark another example. This time, our computation will consist of verifying many Lamport+ signatures (see example). This is a much more complicated computation. For comparison, the execution trace for the Rescue hash chain requires only 4 columns, while Lamport+ signature verification uses 22 columns. The table below shows benchmarks for verifying different numbers of signatures on the same 8-core machine (at 123-bit security level).

| # of signatures | Trace time | Proving time | Prover RAM | Proof size | Verifier time |
| --- | --- | --- | --- | --- | --- |
| 64 | 0.2 sec | 1.2 sec | 0.5 GB | 110 KB | 4.4 ms |
| 128 | 0.4 sec | 2.6 sec | 1.0 GB | 121 KB | 4.4 ms |
| 256 | 0.8 sec | 5.3 sec | 1.9 GB | 132 KB | 4.5 ms |
| 512 | 1.6 sec | 10.9 sec | 3.8 GB | 139 KB | 4.9 ms |
| 1024 | 3.2 sec | 20.5 sec | 7.6 GB | 152 KB | 5.9 ms |

A few observations about these benchmarks:

  • Trace time and prover RAM (RAM consumed by the prover during proof generation) grow pretty much linearly with the size of the computation.
  • Proving time grows very slightly faster than linearly with the size of the computation.
  • Proof size and verifier time grow much slower than linearly (actually logarithmically) with the size of the computation.

Another difference between this example and the Rescue hash chain is that we can generate the execution trace for each signature verification independently, and thus we can build the entire trace in parallel using multiple threads. In general, Winterfell prover performance scales nearly linearly with every additional CPU core, because nearly all steps of the STARK proof generation process can be parallelized. The table below illustrates this relationship using the example of verifying 1024 Lamport+ signatures (at 123-bit security level). This time, our benchmark machine is an AMD EPYC 7003 with 64 CPU cores.

| Threads | Trace time | Proving time | Total time (trace + proving) | Improvement |
| --- | --- | --- | --- | --- |
| 1 | 28 sec | 127 sec | 155 sec | 1x |
| 2 | 14 sec | 64 sec | 78 sec | 2x |
| 4 | 6.7 sec | 33 sec | 39.7 sec | 3.9x |
| 8 | 3.8 sec | 17 sec | 20.8 sec | 7.5x |
| 16 | 2 sec | 10.3 sec | 12.3 sec | 12.6x |
| 32 | 1 sec | 6 sec | 7 sec | 22.1x |
| 64 | 0.6 sec | 3.8 sec | 4.4 sec | 35.2x |

References

If you are interested in learning how STARKs work under the hood, here are a few links to get you started. From the standpoint of this library, arithmetization is by far the most important concept to understand.

Vitalik Buterin's blog series on zk-STARKs:

Alan Szepieniec's STARK tutorial:

StarkWare's STARK Math blog series:

License

This project is MIT licensed.

Acknowledgements

The original codebase was developed by Irakliy Khaburzaniya (@irakliyk) with contributions from François Garillot (@huitseeker), Kevin Lewi (@kevinlewi), Konstantinos Chalkias (@kchalkias), and Jasleen Malvai (@Jasleen1).
