• Stars: 771
• Rank: 57,526 (Top 2%)
• Language: TypeScript
• License: Other
• Created: almost 3 years ago
• Updated: about 2 months ago


Repository Details

🦄️ Yatai: Model Deployment at Scale on Kubernetes

Yatai (屋台, food cart) lets you deploy, operate and scale Machine Learning services on Kubernetes.

It supports deploying any ML models via BentoML: the unified model serving framework.

yatai-overview-page

👉 Join our Slack community today!

Looking for the fastest way to give Yatai a try? Check out BentoML Cloud to get started today.


Why Yatai?

🍱 Made for BentoML, deploy at scale

  • Scale BentoML to its full potential on a distributed system, optimized for cost saving and performance.
  • Manage deployment lifecycle to deploy, update, or rollback via API or Web UI.
  • Centralized registry providing the foundation for CI/CD via artifact management APIs, labeling, and WebHooks for custom integration.

🚅 Cloud native & DevOps friendly

  • Kubernetes-native workflow via BentoDeployment CRD (Custom Resource Definition), which can easily fit into an existing GitOps workflow.
  • Native integration with Grafana stack for observability.
  • Support for traffic control with Istio.
  • Compatible with all major cloud platforms (AWS, Azure, and GCP).

Getting Started

  • 📖 Documentation - Overview of the Yatai docs and related resources
  • ⚙️ Installation - Hands-on instructions for installing Yatai for production use
  • 👉 Join Community Slack - Get help from our community and maintainers

Quick Tour

Let's try out Yatai locally in a minikube cluster!

⚙️ Prerequisites:

  • Install latest minikube: https://minikube.sigs.k8s.io/docs/start/
  • Install latest Helm: https://helm.sh/docs/intro/install/
  • Start a minikube Kubernetes cluster: minikube start --cpus 4 --memory 4096; on macOS, use the hyperkit driver to avoid Docker Desktop's networking limitations
  • Check that minikube cluster status is "running": minikube status
  • Make sure your kubectl is configured with minikube context: kubectl config current-context
  • Enable ingress controller: minikube addons enable ingress
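Taken together, the prerequisite steps above can be run as a single sequence (assuming minikube, Helm, and kubectl are already installed; the driver flag applies only to macOS):

```shell
# Start a local cluster with enough resources for Yatai and its dependencies
minikube start --cpus 4 --memory 4096   # on macOS, add: --driver=hyperkit

# Verify the cluster is up and that kubectl points at it
minikube status                         # should report "Running"
kubectl config current-context          # should print "minikube"

# Enable the ingress addon used to expose deployments
minikube addons enable ingress
```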

🚧 Install Yatai

Install Yatai with the following script:

bash <(curl -s "https://raw.githubusercontent.com/bentoml/yatai/main/scripts/quick-install-yatai.sh")

This script will install Yatai along with its dependencies (PostgreSQL and MinIO) on your minikube cluster.

Note that this installation script is made for development and testing use only. For production deployment, check out the Installation Guide.

To access Yatai web UI, run the following command and keep the terminal open:

kubectl --namespace yatai-system port-forward svc/yatai 8080:80

In a separate terminal, run:

YATAI_INITIALIZATION_TOKEN=$(kubectl get secret yatai-env --namespace yatai-system -o jsonpath="{.data.YATAI_INITIALIZATION_TOKEN}" | base64 --decode)
echo "Open in browser: http://127.0.0.1:8080/setup?token=$YATAI_INITIALIZATION_TOKEN"

Open the printed URL in your browser to finish the admin account setup.

🍱 Push Bento to Yatai

First, get an API token and log in with the BentoML CLI:

  • Keep the kubectl port-forward command in the step above running

  • Go to Yatai's API tokens page: http://127.0.0.1:8080/api_tokens

  • Create a new API token from the UI, making sure to assign "API" access under "Scopes"

  • Copy the login command shown upon token creation and run it as a shell command, e.g.:

    bentoml yatai login --api-token {YOUR_TOKEN} --endpoint http://127.0.0.1:8080

If you don't already have a Bento built, run the following commands from the BentoML Quickstart Project to build a sample Bento:

git clone https://github.com/bentoml/bentoml.git && cd bentoml/examples/quickstart
pip install -r ./requirements.txt
python train.py
bentoml build

Push your newly built Bento to Yatai:

bentoml push iris_classifier:latest
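To double-check the Bento before and after pushing, the BentoML CLI can list and inspect what is in your local Bento store (the tag shown is the quickstart's; yours will have a different version suffix):

```shell
# List all Bentos in the local store
bentoml list

# Show details for the quickstart Bento
bentoml get iris_classifier:latest
```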

Now you can view and manage models and bentos from the web UI:

yatai-bento-repos

yatai-model-detail

🔧 Install yatai-image-builder component

Yatai's image builder feature ships as a separate component; install it with the following script:

bash <(curl -s "https://raw.githubusercontent.com/bentoml/yatai-image-builder/main/scripts/quick-install-yatai-image-builder.sh")

This will install the BentoRequest CRD (Custom Resource Definition) and the Bento CRD in your cluster. Like the previous one, this script is intended for development and testing only.

🔧 Install yatai-deployment component

Yatai's deployment feature ships as a separate component; install it with the following script:

bash <(curl -s "https://raw.githubusercontent.com/bentoml/yatai-deployment/main/scripts/quick-install-yatai-deployment.sh")

This will install the BentoDeployment CRD (Custom Resource Definition) in your cluster and enable the deployment UI in Yatai. Like the previous scripts, it is intended for development and testing only.

🚢 Deploy Bento!

Once the yatai-deployment component is installed, Bentos pushed to Yatai can be deployed to your Kubernetes cluster and exposed via a Service endpoint.

A Bento Deployment can be created either via Web UI or via a Kubernetes CRD config:

Option 1. Simple Deployment via Web UI

yatai-deployment-creation

Option 2. Deploy with kubectl & CRD

Define your Bento deployment in a my_deployment.yaml file:

apiVersion: resources.yatai.ai/v1alpha1
kind: BentoRequest
metadata:
    name: iris-classifier
    namespace: yatai
spec:
    bentoTag: iris_classifier:3oevmqfvnkvwvuqj
---
apiVersion: serving.yatai.ai/v2alpha1
kind: BentoDeployment
metadata:
    name: my-bento-deployment
    namespace: yatai
spec:
    bento: iris-classifier
    ingress:
        enabled: true
    resources:
        limits:
            cpu: "500m"
            memory: "512Mi"
        requests:
            cpu: "250m"
            memory: "128Mi"
    autoscaling:
        maxReplicas: 10
        minReplicas: 2
    runners:
        - name: iris_clf
          resources:
              limits:
                  cpu: "1000m"
                  memory: "1Gi"
              requests:
                  cpu: "500m"
                  memory: "512Mi"
          autoscaling:
              maxReplicas: 4
              minReplicas: 1

Apply the deployment to your minikube cluster:

kubectl apply -f my_deployment.yaml

Now you can see the deployment process from the Yatai Web UI and find the endpoint URL for accessing the deployed Bento.
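The same rollout can be checked from the command line. As a sketch (resource kind names follow the BentoDeployment CRD installed above; the exact short names and output columns are assumptions):

```shell
# Watch the deployment resource created from my_deployment.yaml
kubectl -n yatai get bentodeployment my-bento-deployment

# Inspect the pods and the ingress that exposes the service endpoint
kubectl -n yatai get pods
kubectl -n yatai get ingress
```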

yatai-deployment-details

Community

Contributing

There are many ways to contribute to the project:

  • If you have any feedback on the project, share it with the community in GitHub Discussions under the BentoML repo.
  • Report issues you're facing and "Thumbs up" on issues and feature requests that are relevant to you.
  • Investigate bugs and review other developers' pull requests.
  • Contribute code or documentation by submitting a GitHub pull request. See the development guide.

Usage Reporting

Yatai collects usage data that helps our team improve the product. Only Yatai's internal API calls are reported. We strip out as much potentially sensitive information as possible, and we never collect user code, model data, model names, or stack traces. Here's the code for usage tracking. You can opt out of usage tracking by setting the Helm chart option doNotTrack to true:

doNotTrack: true
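If Yatai was installed with Helm, the same option can be passed at upgrade time; the release, chart, and namespace names below are assumptions based on the quick-install script and may differ in your setup:

```shell
helm upgrade yatai yatai/yatai \
  --namespace yatai-system \
  --set doNotTrack=true
```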

Or by setting the YATAI_DONOT_TRACK environment variable in the Yatai deployment:

spec:
  template:
    spec:
      containers:
        - env:
            - name: YATAI_DONOT_TRACK
              value: "true"

License

Elastic License 2.0 (ELv2)

More Repositories

1. OpenLLM (Python, 9,124 stars): Run any open-source LLMs, such as Llama 2 and Mistral, as an OpenAI-compatible API endpoint in the cloud.
2. BentoML (Python, 6,714 stars): The easiest way to serve AI/ML models in production - build model inference services, LLM APIs, multi-model inference graphs/pipelines, LLM/RAG apps, and more!
3. OneDiffusion (Python, 325 stars): Run any Stable Diffusion models and fine-tuned weights with ease
4. stable-diffusion-server (Python, 191 stars): Deploy your own Stable Diffusion service
5. bentoctl (Python, 172 stars): Fast model deployment on any cloud 🚀
6. gallery (Python, 134 stars): BentoML example projects 🎨
7. OCR-as-a-Service (Python, 47 stars): Turn any OCR model into an online inference API endpoint 🚀 🌖
8. transformers-nlp-service (Python, 41 stars): Online inference API for NLP Transformer models - summarization, text classification, sentiment analysis, and more
9. CLIP-API-service (Jupyter Notebook, 36 stars): CLIP as a service - embed images and sentences, object recognition, visual reasoning, image classification, and reverse image search
10. BentoVLLM (Python, 32 stars): Self-host LLMs with vLLM and BentoML
11. simple_di (Python, 19 stars): Simple dependency injection framework for Python
12. yatai-deployment (Go, 16 stars): 🚀 Launching Bentos in a Kubernetes cluster
13. Fraud-Detection-Model-Serving (Jupyter Notebook, 14 stars): Online model serving with a fraud detection model trained with XGBoost on the IEEE-CIS dataset
14. aws-sagemaker-deploy (Python, 14 stars): Fast model deployment on AWS SageMaker
15. yatai-image-builder (Go, 14 stars): 🐳 Build OCI images for Bentos in k8s
16. sentence-embedding-bento (Jupyter Notebook, 14 stars): Sentence embedding as a service
17. google-cloud-run-deploy (Python, 13 stars): Fast model deployment on Google Cloud Run
18. aws-lambda-deploy (Python, 13 stars): Fast model deployment on AWS Lambda
19. aws-ec2-deploy (Python, 13 stars): Fast model deployment on AWS EC2
20. IF-multi-GPUs-demo (Python, 13 stars)
21. rag-tutorials (Python, 11 stars): A series of tutorials implementing a RAG service with BentoML and LlamaIndex
22. diffusers-examples (Python, 10 stars): API serving for your diffusers models
23. BentoSVD (Python, 9 stars)
24. Pneumonia-Detection-Demo (Python, 8 stars): Pneumonia detection - healthcare imaging application built with BentoML and a fine-tuned Vision Transformer (ViT) model
25. yatai-chart (Mustache, 7 stars): Helm chart for installing Yatai on Kubernetes ⎈
26. benchmark (Jupyter Notebook, 7 stars): BentoML performance benchmark 🆚
27. plugins (Starlark, 6 stars): The Swiss Army knife for all things BentoML
28. bentoctl-operator-template (Python, 6 stars)
29. heroku-deploy (Python, 6 stars): Deploy BentoML bundled models to Heroku
30. BentoLMDeploy (Python, 5 stars): Self-host LLMs with LMDeploy and BentoML
31. bentoml-core (Rust, 5 stars)
32. BentoControlNet (Python, 4 stars)
33. BentoWhisperX (Python, 4 stars)
34. google-compute-engine-deploy (HCL, 4 stars)
35. BentoCLIP (Python, 4 stars): Building a CLIP application using BentoML
36. BentoRAG (Python, 4 stars): Tutorial: build RAG apps with custom models served with BentoML
37. quickstart (Python, 4 stars): BentoML quickstart example
38. deploy-bento-action (3 stars): A GitHub Action to deploy a Bento to the cloud
39. azure-functions-deploy (Python, 3 stars): Fast model deployment on Azure Functions
40. azure-container-instances-deploy (Python, 3 stars): Fast model deployment on Azure Container Instances
41. containerize-push-action (TypeScript, 3 stars): Docker's build-and-push-action equivalent for BentoML
42. BentoSentenceTransformers (Python, 2 stars): How to build a sentence embedding application using BentoML
43. BentoTRTLLM (Python, 2 stars)
44. bentoml-arize-fraud-detection-workshop (Jupyter Notebook, 2 stars)
45. BentoSDXLTurbo (Python, 2 stars): How to build an image generation application using BentoML
46. yatai-schemas (Go, 1 star)
47. bentoctl-workshops (Python, 1 star)
48. llm-bench (Python, 1 star)
49. bentocloud-homepage-news (1 star)
50. yatai-common (Go, 1 star)
51. BentoBLIP (Python, 1 star): How to build an image captioning application on top of a BLIP model with BentoML
52. BentoYolo (Python, 1 star): BentoML service of YOLOv8
53. .github (1 star): ✨🍱🦄️
54. BentoBark (Python, 1 star)
55. BentoMLCLLM (Python, 1 star)
56. BentoTGI (Python, 1 star)
57. openllm-benchmark (Python, 1 star)