gemma.cpp

gemma.cpp is a lightweight, standalone C++ inference engine for the Gemma foundation models from Google.

For additional information about Gemma, see ai.google.dev/gemma. Model weights, including gemma.cpp specific artifacts, are available on Kaggle.

Who is this project for?

Modern LLM inference engines are sophisticated systems, often with bespoke capabilities extending beyond traditional neural network runtimes. This brings opportunities for research and innovation through co-design of high-level algorithms and low-level computation. However, there is a gap between deployment-oriented C++ inference runtimes, which are not designed for experimentation, and Python-centric ML research frameworks, which abstract away low-level computation through compilation.

gemma.cpp provides a minimalist implementation of Gemma 2B and 7B models, focusing on simplicity and directness rather than full generality. This is inspired by vertically-integrated model implementations such as ggml, llama.c, and llama.rs.

gemma.cpp targets experimentation and research use cases. It is intended to be straightforward to embed in other projects with minimal dependencies and also easily modifiable with a small ~2K LoC core implementation (along with ~4K LoC of supporting utilities). We use the Google Highway Library to take advantage of portable SIMD for CPU inference.
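
To give a flavor of what Highway-based code looks like, below is a minimal sketch of a vectorized elementwise add using Highway's static dispatch. This is illustrative only, not code from gemma.cpp, and it assumes the array length is a multiple of the vector lane count:

#include <cstddef>

#include "hwy/highway.h"

namespace hn = hwy::HWY_NAMESPACE;

// Adds two float arrays lane-by-lane using the best SIMD instruction set
// selected at compile time. Assumes size is a multiple of hn::Lanes(d).
void AddArrays(const float* a, const float* b, float* out, size_t size) {
  const hn::ScalableTag<float> d;  // Tag describing one native float vector.
  for (size_t i = 0; i < size; i += hn::Lanes(d)) {
    const auto va = hn::Load(d, a + i);
    const auto vb = hn::Load(d, b + i);
    hn::Store(hn::Add(va, vb), d, out + i);
  }
}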

For production-oriented edge deployments we recommend standard deployment pathways using Python frameworks like JAX, Keras, PyTorch, and Transformers, which cover all model variations.

Community contributions large and small are welcome. This project follows Google's Open Source Community Guidelines.

Active development is currently done on the dev branch. Please open pull requests targeting the dev branch instead of main, which is intended to be more stable.

Quick Start

System requirements

Before starting, you should have installed:

  • CMake
  • A C++ compiler supporting at least C++17 (Clang is recommended)
  • tar for extracting archives

Building natively on Windows requires the Visual Studio 2022 Build Tools with the optional Clang/LLVM C++ frontend (clang-cl). Both can be installed from the command line with winget:

winget install --id Kitware.CMake
winget install --id Microsoft.VisualStudio.2022.BuildTools --force --override "--passive --wait --add Microsoft.VisualStudio.Workload.VCTools;installRecommended --add Microsoft.VisualStudio.Component.VC.Llvm.Clang --add Microsoft.VisualStudio.Component.VC.Llvm.ClangToolset"

Step 1: Obtain model weights and tokenizer from Kaggle or Hugging Face Hub

Visit the Gemma model page on Kaggle and select Model Variations |> Gemma C++. On this tab, the Variation dropdown includes the options below. Note bfloat16 weights are higher fidelity, while 8-bit switched floating point weights enable faster inference. In general, we recommend starting with the -sfp checkpoints.
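
For intuition on the trade-off: a float32 weight occupies 4 bytes, a bfloat16 weight 2, and an 8-bit weight roughly 1, so a 2 billion parameter model is on the order of 2 GB in 8-bit form (roughly; ignoring metadata). The sketch below shows the standard bfloat16 truncation trick, keeping the top 16 bits (sign, 8-bit exponent, 7 mantissa bits) of a float32. It illustrates bfloat16 in general, not gemma.cpp's sfp format, and omits round-to-nearest for brevity:

#include <cstdint>
#include <cstdio>
#include <cstring>

// bfloat16 is the upper half of an IEEE-754 float32: the same 8-bit
// exponent (same range as float32) but only 7 mantissa bits of precision.
uint16_t FloatToBFloat16(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));      // Bit-exact view of the float.
  return static_cast<uint16_t>(bits >> 16);  // Truncate the low 16 bits.
}

float BFloat16ToFloat(uint16_t h) {
  uint32_t bits = static_cast<uint32_t>(h) << 16;  // Low mantissa bits become 0.
  float f;
  std::memcpy(&f, &bits, sizeof(f));
  return f;
}

int main() {
  const float w = 0.123456789f;
  // Round-trip shows the precision loss from dropping 16 mantissa bits.
  std::printf("%.9f -> %.9f\n", w, BFloat16ToFloat(FloatToBFloat16(w)));
  return 0;
}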

Alternatively, visit the gemma.cpp models on the Hugging Face Hub. First go to the model repository of the model of interest (see recommendations below). Then, click the Files and versions tab and download the model and tokenizer files. For programmatic downloading, if you have huggingface_hub installed, you can also download by running:

huggingface-cli login # Just the first time
huggingface-cli download google/gemma-2b-sfp-cpp --local-dir build/

2B instruction-tuned (it) and pre-trained (pt) models:

Model name   Description
2b-it        2 billion parameter instruction-tuned model, bfloat16
2b-it-sfp    2 billion parameter instruction-tuned model, 8-bit switched floating point
2b-pt        2 billion parameter pre-trained model, bfloat16
2b-pt-sfp    2 billion parameter pre-trained model, 8-bit switched floating point

7B instruction-tuned (it) and pre-trained (pt) models:

Model name   Description
7b-it        7 billion parameter instruction-tuned model, bfloat16
7b-it-sfp    7 billion parameter instruction-tuned model, 8-bit switched floating point
7b-pt        7 billion parameter pre-trained model, bfloat16
7b-pt-sfp    7 billion parameter pre-trained model, 8-bit switched floating point

Note

Important: We strongly recommend starting off with the 2b-it-sfp model to get up and running.

Step 2: Extract Files

If you downloaded the models from Hugging Face, skip to step 3.

After filling out the consent form, the download should proceed to retrieve a tar archive file archive.tar.gz. Extract files from archive.tar.gz (this can take a few minutes):

tar -xf archive.tar.gz

This should produce a file containing model weights such as 2b-it-sfp.sbs and a tokenizer file (tokenizer.spm). You may want to move these files to a convenient directory location (e.g. the build/ directory in this repo).

Step 3: Build

The build system uses CMake. To build the gemma inference runtime, create a build directory and generate the build files using cmake from the top-level project directory. Note that if you previously ran cmake and are re-running with a different setting, be sure to clean out the build/ directory with rm -rf build/* (warning: this will delete any other files in the build/ directory).

For the 8-bit switched floating point weights (sfp), run cmake with no options:

Unix-like Platforms

cmake -B build

or, if you downloaded bfloat16 weights (any model without -sfp in the name), run cmake with WEIGHT_TYPE set to Highway's hwy::bfloat16_t type instead (this will be simplified in the future; we recommend -sfp weights over bfloat16 for faster inference):

cmake -B build -DWEIGHT_TYPE=hwy::bfloat16_t

After running whichever of the above cmake invocations is appropriate for your weights, you can build the ./gemma executable:

# Configure `build` directory
cmake --preset make

# Build project using make
cmake --build --preset make -j [number of parallel threads to use]

Replace [number of parallel threads to use] with a number; the number of cores available on your system is a reasonable heuristic. For example, -j4 builds using 4 threads, and if the nproc command is available, -j$(nproc) is a reasonable default.

If you aren't sure of the right value for the -j flag, you can simply omit it and the build should still produce the ./gemma executable.

Note

On Windows Subsystem for Linux (WSL), users should set the number of parallel threads to 1. Using a larger number may result in errors.

If the build is successful, you should now have a gemma executable in the build/ directory.

Windows

# Configure `build` directory
cmake --preset windows

# Build project using Visual Studio Build Tools
cmake --build --preset windows -j [number of parallel threads to use]

If the build is successful, you should now have a gemma.exe executable in the build/ directory.

Bazel

bazel build -c opt --cxxopt=-std=c++20 :gemma

If the build is successful, you should now have a gemma executable in the bazel-bin/ directory.

Step 4: Run

You can now run gemma from inside the build/ directory.

gemma has the following required arguments:

Argument               Description                    Example value
--model                The model type.                2b-it, 2b-pt, 7b-it, 7b-pt, ... (see above)
--compressed_weights   The compressed weights file.   2b-it-sfp.sbs, ... (see above)
--tokenizer            The tokenizer file.            tokenizer.spm

gemma is invoked as:

./gemma \
--tokenizer [tokenizer file] \
--compressed_weights [compressed weights file] \
--model [2b-it or 2b-pt or 7b-it or 7b-pt or ...]

Example invocation for the following configuration:

  • Compressed weights file 2b-it-sfp.sbs (2B instruction-tuned model, 8-bit switched floating point).
  • Tokenizer file tokenizer.spm.

./gemma \
--tokenizer tokenizer.spm \
--compressed_weights 2b-it-sfp.sbs \
--model 2b-it

Troubleshooting and FAQs

Running ./gemma fails with "Failed to read cache gating_ein_0 (error 294) ..."

The most common problem is that the cmake build used the wrong weight type and gemma is attempting to load bfloat16 weights (2b-it, 2b-pt, 7b-it, 7b-pt) using the default switched floating point (sfp) build, or vice versa. Revisit step #3 and check that the cmake command used to build gemma was correct for the weights that you downloaded.

In the future we will move model format handling from compile time to runtime to simplify this.

Problems building in Windows / Visual Studio

Currently if you're using Windows, we recommend building in WSL (Windows Subsystem for Linux). We are exploring options to enable other build configurations, see issues for active discussion.

Model does not respond to instructions and produces strange output

A common issue is that you are using a pre-trained model, which is not instruction-tuned and thus does not respond to instructions. Make sure you are using an instruction-tuned model (2b-it-sfp, 2b-it, 7b-it-sfp, 7b-it) and not a pre-trained model (any model with a -pt suffix).

How do I convert my fine-tune to a .sbs compressed model file?

We're working on a Python script to convert a standard model format to .sbs, and hope to have it available in the next week or so. Follow this issue for updates.

What are some easy ways to make the model run faster?

  1. Make sure you are using the 8-bit switched floating point -sfp models.
  2. If you're on a laptop, make sure the power mode is set to maximize performance and power saving mode is off. For most laptops, power saving modes activate automatically when the computer is not plugged in.
  3. Close other unused CPU-intensive applications.
  4. On Macs, anecdotally we observe a "warm-up" ramp-up in speed as performance cores get engaged.
  5. Experiment with the --num_threads argument value. Depending on the device, larger numbers don't always mean better performance; the sketch after this list shows one way to pick a starting point.
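
If you want a programmatic starting point for --num_threads, the number of hardware threads reported by the OS is a reasonable first guess. A minimal C++ sketch (the best value still varies by device and is worth finding by experiment):

#include <cstdio>
#include <thread>

int main() {
  // Reports the number of concurrent hardware threads; returns 0 if unknown.
  unsigned n = std::thread::hardware_concurrency();
  if (n == 0) n = 1;  // Fall back to a single thread when undetectable.
  std::printf("Starting point for --num_threads: %u\n", n);
  return 0;
}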

We're also working on algorithmic and optimization approaches for faster inference, stay tuned.

Usage

gemma has different usage modes, controlled by the verbosity flag.

All usage modes are currently interactive, triggering text generation upon newline input.

Verbosity       Usage mode   Details
--verbosity 0   Minimal      Only prints generation output. Suitable as a CLI tool.
--verbosity 1   Default      Standard user-facing terminal UI.
--verbosity 2   Detailed     Shows additional developer and debug info.

Interactive Terminal App

By default, verbosity is set to 1, bringing up a terminal-based interactive interface when gemma is invoked:

$ ./gemma [...]
  __ _  ___ _ __ ___  _ __ ___   __ _   ___ _ __  _ __
 / _` |/ _ \ '_ ` _ \| '_ ` _ \ / _` | / __| '_ \| '_ \
| (_| |  __/ | | | | | | | | | | (_| || (__| |_) | |_) |
 \__, |\___|_| |_| |_|_| |_| |_|\__,_(_)___| .__/| .__/
  __/ |                                    | |   | |
 |___/                                     |_|   |_|

tokenizer                     : tokenizer.spm
compressed_weights            : 2b-it-sfp.sbs
model                         : 2b-it
weights                       : [no path specified]
max_tokens                    : 3072
max_generated_tokens          : 2048

*Usage*
  Enter an instruction and press enter (%C reset conversation, %Q quits).

*Examples*
  - Write an email to grandma thanking her for the cookies.
  - What are some historical attractions to visit around Massachusetts?
  - Compute the nth fibonacci number in javascript.
  - Write a standup comedy bit about WebGPU programming.

> What are some outdoorsy places to visit around Boston?

[ Reading prompt ] .....................


**Boston Harbor and Islands:**

* **Boston Harbor Islands National and State Park:** Explore pristine beaches, wildlife, and maritime history.
* **Charles River Esplanade:** Enjoy scenic views of the harbor and city skyline.
* **Boston Harbor Cruise Company:** Take a relaxing harbor cruise and admire the city from a different perspective.
* **Seaport Village:** Visit a charming waterfront area with shops, restaurants, and a seaport museum.

**Forest and Nature:**

* **Forest Park:** Hike through a scenic forest with diverse wildlife.
* **Quabbin Reservoir:** Enjoy boating, fishing, and hiking in a scenic setting.
* **Mount Forest:** Explore a mountain with breathtaking views of the city and surrounding landscape.

...

Usage as a Command Line Tool

To use the gemma executable as a command line tool, it may be useful to create an alias for gemma.cpp with the arguments fully specified:

alias gemma2b="~/gemma.cpp/build/gemma -- --tokenizer ~/gemma.cpp/build/tokenizer.spm --compressed_weights ~/gemma.cpp/build/2b-it-sfp.sbs --model 2b-it --verbosity 0"

Replace the above paths with your own paths to the model and tokenizer files from the download.

Here is an example of prompting gemma with a truncated input file (using a gemma2b alias as defined above):

cat configs.h | tail -35 | tr '\n' ' ' | xargs -0 echo "What does this C++ code do: " | gemma2b

Note

CLI usage of gemma.cpp is experimental and should take context length limitations into account (with the defaults shown above, max_tokens 3072 and max_generated_tokens 2048, roughly 1K tokens remain for the prompt).

The output of the above command should look like:

$ cat configs.h | tail -35 | tr '\n' ' ' | xargs -0 echo "What does this C++ code do: " | gemma2b
[ Reading prompt ] ......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
The code defines two C++ structs, `ConfigGemma7B` and `ConfigGemma2B`, which are used for configuring a deep learning model.

**ConfigGemma7B**:

* `kSeqLen`: Stores the length of the sequence to be processed. It's set to 7168.
* `kVocabSize`: Stores the size of the vocabulary, which is 256128.
* `kLayers`: Number of layers in the deep learning model. It's set to 28.
* `kModelDim`: Dimension of the model's internal representation. It's set to 3072.
* `kFFHiddenDim`: Dimension of the feedforward and recurrent layers' hidden representations. It's set to 16 * 3072 / 2.

**ConfigGemma2B**:

* `kSeqLen`: Stores the length of the sequence to be processed. It's also set to 7168.
* `kVocabSize`: Size of the vocabulary, which is 256128.
* `kLayers`: Number of layers in the deep learning model. It's set to 18.
* `kModelDim`: Dimension of the model's internal representation. It's set to 2048.
* `kFFHiddenDim`: Dimension of the feedforward and recurrent layers' hidden representations. It's set to 16 * 2048 / 2.

These structs are used to configure a deep learning model with specific parameters for either Gemma7B or Gemma2B architecture.

Incorporating gemma.cpp as a Library in your Project

The easiest way to incorporate gemma.cpp in your own project is to pull in gemma.cpp and dependencies using FetchContent. You can add the following to your CMakeLists.txt:

include(FetchContent)

FetchContent_Declare(sentencepiece GIT_REPOSITORY https://github.com/google/sentencepiece GIT_TAG 53de76561cfc149d3c01037f0595669ad32a5e7c)
FetchContent_MakeAvailable(sentencepiece)

FetchContent_Declare(gemma GIT_REPOSITORY https://github.com/google/gemma.cpp GIT_TAG origin/main)
FetchContent_MakeAvailable(gemma)

FetchContent_Declare(highway GIT_REPOSITORY https://github.com/google/highway.git GIT_TAG da250571a45826b21eebbddc1e50d0c1137dee5f)
FetchContent_MakeAvailable(highway)

Note that for the gemma.cpp GIT_TAG, you may replace origin/main with a specific commit hash if you would like to pin the library version.

After your executable is defined (substitute your executable name for [Executable Name] below):

target_link_libraries([Executable Name] libgemma hwy hwy_contrib sentencepiece)
FetchContent_GetProperties(gemma)
FetchContent_GetProperties(sentencepiece)
target_include_directories([Executable Name] PRIVATE ${gemma_SOURCE_DIR})
target_include_directories([Executable Name] PRIVATE ${sentencepiece_SOURCE_DIR})

Building gemma.cpp as a Library

gemma.cpp can also be used as a library dependency in your own project. The library artifact can be built by modifying the make invocation to build the libgemma target instead of gemma.

Note

If you are using gemma.cpp in your own project with the FetchContent steps in the previous section, building the library is done automatically by cmake and this section can be skipped.

First, run cmake:

cmake -B build

Then, run make with the libgemma target:

cd build
make -j [number of parallel threads to use] libgemma

If this is successful, you should now have a libgemma library file in the build/ directory. On Unix platforms, the filename is libgemma.a.

Independent Projects Using gemma.cpp

Some independent projects using gemma.cpp:

If you would like to have your project included, feel free to get in touch or submit a PR with a README.md edit.

Acknowledgements and Contacts

gemma.cpp was started in fall 2023 by Austin Huang and Jan Wassenberg, and subsequently released February 2024 thanks to contributions from Phil Culliton, Paul Chang, and Dan Zheng.

This is not an officially supported Google product.
