
Text Embeddings Inference


A blazing fast inference solution for text embeddings models.

Benchmark for BAAI/bge-base-en-v1.5 on an NVIDIA A10 with a sequence length of 512 tokens (benchmark charts available in the repository).


Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5. TEI implements many features such as:

  • No model graph compilation step
  • Small docker images and fast boot times. Get ready for true serverless!
  • Token based dynamic batching
  • Optimized transformers code for inference using Flash Attention, Candle and cuBLASLt
  • Safetensors weight loading
  • Production ready (distributed tracing with Open Telemetry, Prometheus metrics)
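
As a quick illustration of the production features, once a server from the Docker section below is running on port 8080, its Prometheus metrics can be scraped directly (the /metrics route is an assumption based on the default configuration; the OpenAPI docs list the exact routes):

curl 127.0.0.1:8080/metrics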

Get Started

Supported Models

Text Embeddings

You can use any JinaBERT model with Alibi or absolute positions, or any BERT, CamemBERT, RoBERTa, or XLM-RoBERTa model with absolute positions, in text-embeddings-inference.

Support for other model types will be added in the future.

Examples of supported models:

MTEB Rank   Model Type    Model ID
1           BERT          BAAI/bge-large-en-v1.5
2           BERT          BAAI/bge-base-en-v1.5
3           BERT          llmrails/ember-v1
4           BERT          thenlper/gte-large
5           BERT          thenlper/gte-base
6           BERT          intfloat/e5-large-v2
7           BERT          BAAI/bge-small-en-v1.5
10          BERT          intfloat/e5-base-v2
11          XLM-RoBERTa   intfloat/multilingual-e5-large
N/A         JinaBERT      jinaai/jina-embeddings-v2-base-en
N/A         JinaBERT      jinaai/jina-embeddings-v2-small-en

You can explore the list of best performing text embeddings models on the MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard.

Sequence Classification and Re-Ranking

text-embeddings-inference v0.4.0 added support for CamemBERT, RoBERTa and XLM-RoBERTa Sequence Classification models.

Example of supported sequence classification models:

Task                 Model Type    Model ID                           Revision
Re-Ranking           XLM-RoBERTa   BAAI/bge-reranker-large            refs/pr/4
Re-Ranking           XLM-RoBERTa   BAAI/bge-reranker-base             refs/pr/5
Sentiment Analysis   RoBERTa       SamLowe/roberta-base-go_emotions

Docker

model=BAAI/bge-large-en-v1.5
revision=refs/pr/5
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.5 --model-id $model --revision $revision

And then you can make requests like:

curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
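
The inputs field also accepts a list of strings, so a single request can embed several texts at once (bounded by --max-client-batch-size, 32 by default); a minimal sketch:

curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":["What is Deep Learning?","What is Machine Learning?"]}' \
    -H 'Content-Type: application/json'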

Note: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 12.0 or higher.

To see all options to serve your models:

text-embeddings-router --help
Usage: text-embeddings-router [OPTIONS]

Options:
      --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `thenlper/gte-base`. 
          Or it can be a local directory containing the necessary files as saved by `save_pretrained(...)` methods of 
          transformers

          [env: MODEL_ID=]
          [default: thenlper/gte-base]

      --revision <REVISION>
          The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id 
          or a branch like `refs/pr/2`

          [env: REVISION=]

      --tokenization-workers <TOKENIZATION_WORKERS>
          Optionally control the number of tokenizer workers used for payload tokenization, validation and truncation. 
          Defaults to the number of CPU cores on the machine

          [env: TOKENIZATION_WORKERS=]

      --dtype <DTYPE>
          The dtype to be forced upon the model

          [env: DTYPE=]
          [possible values: float16, float32]

      --pooling <POOLING>
          Optionally control the pooling method for embedding models.

          If `pooling` is not set, the pooling configuration will be parsed from the model `1_Pooling/config.json` 
          configuration.

          If `pooling` is set, it will override the model pooling configuration

          [env: POOLING=]
          [possible values: cls, mean]

      --max-concurrent-requests <MAX_CONCURRENT_REQUESTS>
          The maximum amount of concurrent requests for this particular deployment. 
          Having a low limit will refuse client requests instead of having them wait for too long and is usually good 
          to handle backpressure correctly

          [env: MAX_CONCURRENT_REQUESTS=]
          [default: 512]

      --max-batch-tokens <MAX_BATCH_TOKENS>
          **IMPORTANT** This is one critical control to allow maximum usage of the available hardware.

          This represents the total amount of potential tokens within a batch.

          For `max_batch_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens.

          Overall this number should be the largest possible until the model is compute bound. Since the actual memory 
          overhead depends on the model implementation, text-embeddings-inference cannot infer this number automatically.

          [env: MAX_BATCH_TOKENS=]
          [default: 16384]

      --max-batch-requests <MAX_BATCH_REQUESTS>
          Optionally control the maximum number of individual requests in a batch

          [env: MAX_BATCH_REQUESTS=]

      --max-client-batch-size <MAX_CLIENT_BATCH_SIZE>
          Control the maximum number of inputs that a client can send in a single request

          [env: MAX_CLIENT_BATCH_SIZE=]
          [default: 32]

      --hf-api-token <HF_API_TOKEN>
          Your HuggingFace hub token

          [env: HF_API_TOKEN=]

      --hostname <HOSTNAME>
          The IP address to listen on

          [env: HOSTNAME=]
          [default: 0.0.0.0]

  -p, --port <PORT>
          The port to listen on

          [env: PORT=]
          [default: 3000]

      --uds-path <UDS_PATH>
          The name of the unix socket some text-embeddings-inference backends will use as they communicate internally 
          with gRPC

          [env: UDS_PATH=]
          [default: /tmp/text-embeddings-inference-server]

      --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk 
          for instance

          [env: HUGGINGFACE_HUB_CACHE=/data]

      --json-output
          Outputs the logs in JSON format (useful for telemetry)

          [env: JSON_OUTPUT=]

      --otlp-endpoint <OTLP_ENDPOINT>
          [env: OTLP_ENDPOINT=]

      --cors-allow-origin <CORS_ALLOW_ORIGIN>
          [env: CORS_ALLOW_ORIGIN=]
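
For instance, a sketch of a launch that raises the batching and concurrency limits for a larger GPU (the values are illustrative, not recommendations; tune max-batch-tokens until the model is compute bound, as described above):

text-embeddings-router --model-id BAAI/bge-base-en-v1.5 \
    --max-batch-tokens 32768 \
    --max-concurrent-requests 1024 \
    --port 8080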

Docker Images

Text Embeddings Inference ships with multiple Docker images that you can use to target a specific backend:

Architecture                          Image
CPU                                   ghcr.io/huggingface/text-embeddings-inference:cpu-0.5
Volta                                 NOT SUPPORTED
Turing (T4, RTX 2000 series, ...)     ghcr.io/huggingface/text-embeddings-inference:turing-0.5 (experimental)
Ampere 80 (A100, A30)                 ghcr.io/huggingface/text-embeddings-inference:0.5
Ampere 86 (A10, A40, ...)             ghcr.io/huggingface/text-embeddings-inference:86-0.5
Ada Lovelace (RTX 4000 series, ...)   ghcr.io/huggingface/text-embeddings-inference:89-0.5
Hopper (H100)                         ghcr.io/huggingface/text-embeddings-inference:hopper-0.5 (experimental)

Warning: Flash Attention is turned off by default for the Turing image as it suffers from precision issues. You can turn Flash Attention v1 ON by using the USE_FLASH_ATTENTION=True environment variable.
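
With the model and volume variables from the Docker section above, enabling Flash Attention v1 on a Turing card might look like this (a sketch; weigh the precision trade-off first):

docker run --gpus all -e USE_FLASH_ATTENTION=True -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:turing-0.5 --model-id $model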

API documentation

You can consult the OpenAPI documentation of the text-embeddings-inference REST API using the /docs route. The Swagger UI is also available at: https://huggingface.github.io/text-embeddings-inference.

Using a private or gated model

You can use the HUGGING_FACE_HUB_TOKEN environment variable to configure the token used by text-embeddings-inference, giving you access to protected resources.

For example:

  1. Go to https://huggingface.co/settings/tokens
  2. Copy your cli READ token
  3. Export HUGGING_FACE_HUB_TOKEN=<your cli READ token>

or with Docker:

model=<your private model>
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=<your cli READ token>

docker run --gpus all -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.5 --model-id $model

Using Re-ranker models

text-embeddings-inference v0.4.0 added support for CamemBERT, RoBERTa and XLM-RoBERTa Sequence Classification models. Re-ranker models are Sequence Classification cross-encoder models with a single class that scores the similarity between a query and a text.

See this blog post by the LlamaIndex team to understand how you can use re-ranker models in your RAG pipeline to improve downstream performance.

model=BAAI/bge-reranker-large
revision=refs/pr/4
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.5 --model-id $model --revision $revision

And then you can rank the similarity between a query and a list of texts with:

curl 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json'
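
The response is a JSON array of objects with index and score fields (treat the exact schema as an assumption and confirm it on the /docs route), so the top candidate can be picked client-side, for example with jq:

curl -s 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json' | jq 'sort_by(-.score) | .[0]'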

Using Sequence Classification models

You can also use classic Sequence Classification models like SamLowe/roberta-base-go_emotions:

model=SamLowe/roberta-base-go_emotions
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.5 --model-id $model 

Once you have deployed the model, you can use the predict endpoint to get the emotions most associated with an input:

curl 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":"I like you."}' \
    -H 'Content-Type: application/json'

Distributed Tracing

text-embeddings-inference is instrumented with distributed tracing using OpenTelemetry. You can use this feature by setting the address to an OTLP collector with the --otlp-endpoint argument.
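
A minimal sketch, assuming an OpenTelemetry collector reachable at otel-collector on its default gRPC port 4317 (the hostname is a placeholder) and the model and volume variables from the Docker section above:

docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.5 --model-id $model --otlp-endpoint http://otel-collector:4317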

Local install

CPU

You can also opt to install text-embeddings-inference locally.

First install Rust:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Then run:

# On x86
cargo install --path router -F candle -F mkl
# On M1 or M2
cargo install --path router -F candle -F accelerate

You can now launch Text Embeddings Inference on CPU with:

model=BAAI/bge-large-en-v1.5
revision=refs/pr/5

text-embeddings-router --model-id $model --revision $revision --port 8080

Note: on some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run:

sudo apt-get install libssl-dev gcc -y

CUDA

GPUs with CUDA compute capability lower than 7.5 are not supported (V100, Titan V, GTX 1000 series, ...).

Make sure you have CUDA and the NVIDIA drivers installed. We recommend using NVIDIA drivers with CUDA version 12.0 or higher. You also need to add the NVIDIA binaries to your path:

export PATH=$PATH:/usr/local/cuda/bin

Then run:

# This can take a while as we need to compile a lot of CUDA kernels

# On Turing GPUs (T4, RTX 2000 series ... )
cargo install --path router -F candle-cuda-turing --no-default-features

# On Ampere and Hopper
cargo install --path router -F candle-cuda --no-default-features

You can now launch Text Embeddings Inference on GPU with:

model=BAAI/bge-large-en-v1.5
revision=refs/pr/5

text-embeddings-router --model-id $model --revision $revision --port 8080

Docker build

You can build the CPU container with:

docker build .

To build the CUDA containers, you need to know the compute capability of the GPU you will be using at runtime.

Then you can build the container with:

# Example for Turing (T4, RTX 2000 series, ...)
runtime_compute_cap=75

# Example for A100
runtime_compute_cap=80

# Example for A10
runtime_compute_cap=86

# Example for Ada Lovelace (RTX 4000 series, ...)
runtime_compute_cap=89

# Example for H100
runtime_compute_cap=90

docker build . -f Dockerfile-cuda --build-arg CUDA_COMPUTE_CAP=$runtime_compute_cap
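
As an illustrative follow-up (the image tag is arbitrary), you can tag the build and run it the same way as the prebuilt images:

docker build . -f Dockerfile-cuda --build-arg CUDA_COMPUTE_CAP=$runtime_compute_cap -t text-embeddings-inference:custom-cuda
docker run --gpus all -p 8080:80 -v $PWD/data:/data text-embeddings-inference:custom-cuda --model-id BAAI/bge-large-en-v1.5 --revision refs/pr/5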
