• Stars: 4
• Rank: 3,304,323 (top 66%)
• Language: C++
• License: BSD 3-Clause "New" or "Revised" License
• Created: about 2 years ago
• Updated: 3 months ago

Repository Details

Implementation of a local in-memory cache for Triton Inference Server's TRITONCACHE API
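Response caching with this local cache is enabled at server startup. A minimal sketch based on the cache flags described in Triton's response-cache documentation; the byte budget and model-repository path are illustrative, and the exact flag syntax is an assumption to verify against your Triton release:

    # Start Triton with the local in-memory cache and a fixed byte budget
    # (flag syntax per the response-cache docs; verify for your release).
    tritonserver --model-repository=/models --cache-config local,size=1048576

Individual models then opt in to response caching through their config.pbtxt:

    # Assumed field from the Triton model configuration schema.
    response_cache { enable: true }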

More Repositories

1. server (Python, 8,180 stars)
   The Triton Inference Server provides an optimized cloud and edge inferencing solution.

2. pytriton (Python, 725 stars)
   PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (a usage sketch follows this list).

3. tensorrtllm_backend (Python, 692 stars)
   The Triton TensorRT-LLM backend.

4. client (C++, 543 stars)
   Triton Python, C++, and Java client libraries, plus gRPC-generated client examples for Go, Java, and Scala (a usage sketch follows this list).

5. tutorials (Python, 540 stars)
   Tutorials and examples for the Triton Inference Server.

6. python_backend (C++, 508 stars)
   Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python (a skeleton follows this list).

7. model_analyzer (Python, 423 stars)
   Triton Model Analyzer is a CLI tool that helps users understand the compute and memory requirements of models served by the Triton Inference Server.

8. fastertransformer_backend (Python, 412 stars)

9. backend (C++, 274 stars)
   Common source, scripts, and utilities for creating Triton backends.

10. model_navigator (Python, 170 stars)
    Triton Model Navigator is an inference toolkit for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs.

11. vllm_backend (Python, 155 stars)

12. dali_backend (C++, 122 stars)
    The Triton backend that runs GPU-accelerated data pre-processing pipelines implemented with DALI's Python API.

13. onnxruntime_backend (C++, 120 stars)
    The Triton backend for the ONNX Runtime.

14. pytorch_backend (C++, 113 stars)
    The Triton backend for PyTorch TorchScript models.

15. core (C++, 101 stars)
    The core library and APIs implementing the Triton Inference Server.

16. fil_backend (Jupyter Notebook, 71 stars)
    FIL backend for the Triton Inference Server.

17. common (C++, 61 stars)
    Common source, scripts, and utilities shared across all Triton repositories.

18. tensorrt_backend (C++, 58 stars)
    The Triton backend for TensorRT.

19. hugectr_backend (Jupyter Notebook, 50 stars)

20. triton_cli (Python, 45 stars)
    Triton CLI is an open-source command-line interface for creating, deploying, and profiling models served by the Triton Inference Server.

21. tensorflow_backend (C++, 42 stars)
    The Triton backend for TensorFlow.

22. paddlepaddle_backend (C++, 32 stars)

23. openvino_backend (C++, 27 stars)
    OpenVINO backend for Triton.

24. developer_tools (C++, 18 stars)

25. stateful_backend (C++, 13 stars)
    Triton backend that automatically manages model state tensors in the sequence batcher.

26. redis_cache (C++, 11 stars)
    An implementation of the TRITONCACHE API backed by Redis.

27. checksum_repository_agent (C++, 8 stars)
    The Triton repository agent that verifies model checksums.

28. contrib (Python, 8 stars)
    Community contributions to Triton that are not officially supported or maintained by the Triton project.

29. third_party (C, 7 stars)
    Third-party source packages that are modified for use in Triton.

30. identity_backend (C++, 6 stars)
    Example Triton backend that demonstrates most of the Triton Backend API.

31. repeat_backend (C++, 5 stars)
    An example Triton backend that demonstrates sending zero, one, or multiple responses per request.

32. square_backend (C++, 2 stars)
    Simple Triton backend used for testing.
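Sketch for entry 2 (pytriton): a plain Python callable bound and served as a Triton model. This is a minimal sketch assuming PyTriton's documented decorator-based API; the model name, tensor names, and batch size are illustrative.

    import numpy as np

    from pytriton.decorators import batch
    from pytriton.model_config import ModelConfig, Tensor
    from pytriton.triton import Triton

    @batch
    def infer_fn(INPUT_1):
        # Toy inference callable: double the input batch.
        return {"OUTPUT_1": INPUT_1 * 2}

    with Triton() as triton:
        # Bind the callable to a model name and I/O signature, then serve it.
        triton.bind(
            model_name="Doubler",  # illustrative name
            infer_func=infer_fn,
            inputs=[Tensor(name="INPUT_1", dtype=np.float32, shape=(-1,))],
            outputs=[Tensor(name="OUTPUT_1", dtype=np.float32, shape=(-1,))],
            config=ModelConfig(max_batch_size=8),
        )
        triton.serve()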
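Sketch for entry 4 (client): the Python client libraries ship as the tritonclient package on PyPI. A minimal HTTP example, assuming a server listening on localhost:8000; the model and tensor names are illustrative.

    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build one FP32 input tensor for the request.
    data = np.random.rand(1, 16).astype(np.float32)
    inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
    inp.set_data_from_numpy(data)

    # Run inference and read the output tensor back by name.
    result = client.infer(model_name="my_model", inputs=[inp])
    print(result.as_numpy("OUTPUT0"))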
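Skeleton for entry 6 (python_backend): each Python model ships a model.py exposing a TritonPythonModel class that Triton drives through execute(). A minimal sketch following the structure described in the backend's docs; the tensor names are illustrative.

    import triton_python_backend_utils as pb_utils

    class TritonPythonModel:
        def execute(self, requests):
            # Triton may hand over several requests at once; answer each in order.
            responses = []
            for request in requests:
                in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
                out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy() * 2)
                responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
            return responses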