  • Stars: 403
  • Rank: 103,283 (Top 3%)
  • Language: Python
  • License: BSD 3-Clause "New" or "Revised" License
  • Created: over 1 year ago
  • Updated: 12 days ago

Repository Details

This repository contains tutorials and examples for Triton Inference Server

Triton Tutorials

For users accustomed to the "Tensor in" & "Tensor out" approach to Deep Learning Inference, getting started with Triton can raise many questions. The goal of this repository is to familiarize users with Triton's features and to provide guides and examples that ease migration. For a feature-by-feature explanation, refer to the Triton Inference Server documentation.
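
Below is a minimal sketch of that "Tensor in" / "Tensor out" flow using the Triton Python HTTP client. The model name ("my_model") and tensor names ("INPUT0", "OUTPUT0") are placeholders rather than anything defined in this repository; substitute the names, shapes, and datatypes from your model's configuration.

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a Triton server running locally on the default HTTP port.
    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Tensor in: the name, shape, and datatype must match the model's config.
    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_input = httpclient.InferInput("INPUT0", list(input_data.shape), "FP32")
    infer_input.set_data_from_numpy(input_data)

    # Tensor out: run inference and read the output tensor back by name.
    response = client.infer(model_name="my_model", inputs=[infer_input])
    output = response.as_numpy("OUTPUT0")
    print(output.shape)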

Getting Started Checklist

  • Overview Video
  • Conceptual Guide: Deploying Models

Quick Deploy

The focus of these examples is to demonstrate deployment for models trained with various frameworks. They are quick demonstrations that assume the user is already somewhat familiar with Triton; an export sketch follows the list below.

Deploy a ...

  • PyTorch Model
  • TensorFlow Model
  • ONNX Model
  • TensorRT Accelerated Model
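
As one hypothetical starting point for Quick Deploy, the sketch below exports a trained PyTorch model to ONNX and places it in the versioned directory layout Triton expects (<model_repository>/<model_name>/<version>/model.onnx). The resnet18 model and repository path are illustrative assumptions, not part of these guides.

    import os
    import torch
    import torchvision.models as models

    # Any trained PyTorch model works here; resnet18 is just an example.
    model = models.resnet18(weights="DEFAULT").eval()
    dummy_input = torch.randn(1, 3, 224, 224)

    # Triton's ONNX Runtime backend looks for a file named model.onnx inside
    # a numbered version directory under the model repository.
    version_dir = "model_repository/resnet18/1"
    os.makedirs(version_dir, exist_ok=True)
    torch.onnx.export(
        model,
        dummy_input,
        os.path.join(version_dir, "model.onnx"),
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    )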

What does this repository contain?

This repository contains the following resources:

  • Conceptual Guide: This guide focuses on building a conceptual understanding of the general challenges faced whilst building inference infrastructure and how to best tackle these challenges with Triton Inference Server.
  • Quick Deploy: A set of guides for deploying a model from your preferred framework to the Triton Inference Server. These guides assume a basic understanding of Triton, so reviewing the getting started material first is recommended.
  • HuggingFace Guide: Walks the user through the different methods by which a HuggingFace model can be deployed using the Triton Inference Server; a brief Python-backend sketch follows this list.
  • Feature Guides: This folder is meant to house Triton's feature-specific examples.
  • Migration Guide: Migrating from an existing solution to Triton Inference Server? Get an understanding of the general architecture that might best fit your use case.
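
To give a flavor of what the HuggingFace guide covers, here is a minimal, hypothetical model.py for Triton's Python backend that wraps a HuggingFace pipeline. The task, tensor names ("TEXT", "LABEL"), and default checkpoint are assumptions for illustration; the guide itself walks through the supported deployment methods in detail.

    import numpy as np
    import triton_python_backend_utils as pb_utils
    from transformers import pipeline


    class TritonPythonModel:
        def initialize(self, args):
            # Load a HuggingFace pipeline once when Triton loads the model.
            # The task (and its default checkpoint) is a placeholder choice.
            self.pipe = pipeline("text-classification")

        def execute(self, requests):
            responses = []
            for request in requests:
                # String inputs arrive as a numpy array of bytes objects.
                in_tensor = pb_utils.get_input_tensor_by_name(request, "TEXT")
                texts = [t.decode("utf-8") for t in in_tensor.as_numpy().flatten()]

                # Run the pipeline and return the predicted labels as strings.
                labels = [r["label"] for r in self.pipe(texts)]
                out = pb_utils.Tensor("LABEL", np.array(labels, dtype=object))
                responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
            return responses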

Navigating Triton Inference Server Resources

The Triton Inference Server GitHub organization contains multiple repositories housing different features of the Triton Inference Server. The following is not a complete description of all the repositories, but a simple guide to build an intuitive understanding.

  • Server is the main Triton Inference Server Repository.
  • Client contains the libraries and examples needed to create Triton clients; a brief usage sketch follows this list.
  • Backend contains the core scripts and utilities to build a new Triton Backend. Any repository containing the word "backend" is either a framework backend or an example for how to create a backend.
  • Tools like Model Analyzer and Model Navigator provide the tooling to either measure performance or simplify model acceleration.
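
As a small illustration of those client libraries, the sketch below checks server health and inspects a deployed model's metadata over HTTP. "my_model" is a placeholder for a model loaded in your server's model repository.

    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Confirm the server is up and ready before sending inference requests.
    if client.is_server_live() and client.is_server_ready():
        # Returns the model's declared inputs, outputs, and versions as a dict.
        metadata = client.get_model_metadata(model_name="my_model")
        print(metadata["inputs"], metadata["outputs"])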

Adding Requests

To request a new example, open an issue and specify the details. Want to make a contribution? Open a pull request and tag an Admin.

More Repositories

1. server (Python, 7,321 stars): The Triton Inference Server provides an optimized cloud and edge inferencing solution.
2. pytriton (Python, 661 stars): PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments.
3. client (C++, 451 stars): Triton Python, C++ and Java client libraries, and GRPC-generated client examples for Go, Java and Scala.
4. python_backend (C++, 444 stars): Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python.
5. tensorrtllm_backend (Python, 439 stars): The Triton TensorRT-LLM Backend.
6. fastertransformer_backend (Python, 409 stars)
7. model_analyzer (Python, 374 stars): Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of models on the Triton Inference Server.
8. backend (C++, 231 stars): Common source, scripts and utilities for creating Triton backends.
9. model_navigator (Python, 148 stars): Triton Model Navigator is a tool that automates the process of model deployment on the Triton Inference Server.
10. dali_backend (C++, 116 stars): The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
11. onnxruntime_backend (C++, 109 stars): The Triton backend for the ONNX Runtime.
12. pytorch_backend (C++, 93 stars): The Triton backend for PyTorch TorchScript models.
13. vllm_backend (Python, 84 stars)
14. core (C++, 78 stars): The core library and APIs implementing the Triton Inference Server.
15. fil_backend (Jupyter Notebook, 63 stars): FIL backend for the Triton Inference Server.
16. common (C++, 53 stars): Common source, scripts and utilities shared across all Triton repositories.
17. hugectr_backend (Jupyter Notebook, 48 stars)
18. tensorrt_backend (C++, 40 stars): The Triton backend for TensorRT.
19. tensorflow_backend (C++, 39 stars): The Triton backend for TensorFlow.
20. paddlepaddle_backend (C++, 32 stars)
21. openvino_backend (C++, 22 stars): OpenVINO backend for Triton.
22. developer_tools (C++, 15 stars)
23. stateful_backend (C++, 10 stars): Triton backend for managing model state tensors automatically in the sequence batcher.
24. contrib (Python, 8 stars): Community contributions to Triton that are not officially supported or maintained by the Triton project.
25. third_party (C, 7 stars): Third-party source packages that are modified for use in Triton.
26. checksum_repository_agent (C++, 6 stars): The Triton repository agent that verifies model checksums.
27. identity_backend (C++, 6 stars): Example Triton backend that demonstrates most of the Triton Backend API.
28. redis_cache (C++, 5 stars): TRITONCACHE implementation of a Redis cache.
29. repeat_backend (C++, 5 stars): An example Triton backend that demonstrates sending zero, one, or multiple responses for each request.
30. local_cache (C++, 2 stars): Implementation of a local in-memory cache for Triton Inference Server's TRITONCACHE API.
31. square_backend (C++, 2 stars): Simple Triton backend used for testing.