  • Stars: 9,813
  • Rank: 3,597 (top 0.08%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 1 year ago
  • Updated: about 2 months ago


Repository Details

Run any open-source LLM, such as Llama 3.1 or Gemma, as an OpenAI-compatible API endpoint in the cloud.

Banner for OpenLLM

🦾 OpenLLM


An open platform for operating large language models (LLMs) in production.
Fine-tune, serve, deploy, and monitor any LLMs with ease.

📖 Introduction

With OpenLLM, you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI apps.

🚂 State-of-the-art LLMs: built-in support for a wide range of open-source LLMs and model runtimes, including Llama 2, StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more.

🔥 Flexible APIs: serve LLMs over a RESTful API or gRPC with one command, and query via the Web UI, CLI, our Python/JavaScript clients, or any HTTP client.

⛓️ Freedom To Build: First-class support for LangChain, BentoML, and Hugging Face that allows you to easily create your own AI apps by composing LLMs with other models and services.

🎯 Streamline Deployment: Automatically generate your LLM server Docker images, or deploy as a serverless endpoint via ☁️ BentoCloud.

🤖️ Bring your own LLM: Fine-tune any LLM to suit your needs with LLM.tuning(). (Coming soon)

Gif showing OpenLLM Intro


๐Ÿƒ Getting Started

To use OpenLLM, you need to have Python 3.8 (or newer) and pip installed on your system. We highly recommend using a Virtual Environment to prevent package conflicts.
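For example, a minimal setup of an isolated environment using Python's built-in venv module (the .venv directory name is just an example) looks like this:

python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate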

You can install OpenLLM using pip as follows:

pip install openllm

To verify if it's installed correctly, run:

$ openllm -h

Usage: openllm [OPTIONS] COMMAND [ARGS]...

   ██████╗ ██████╗ ███████╗███╗   ██╗██╗     ██╗     ███╗   ███╗
  ██╔═══██╗██╔══██╗██╔════╝████╗  ██║██║     ██║     ████╗ ████║
  ██║   ██║██████╔╝█████╗  ██╔██╗ ██║██║     ██║     ██╔████╔██║
  ██║   ██║██╔═══╝ ██╔══╝  ██║╚██╗██║██║     ██║     ██║╚██╔╝██║
  ╚██████╔╝██║     ███████╗██║ ╚████║███████╗███████╗██║ ╚═╝ ██║
   ╚═════╝ ╚═╝     ╚══════╝╚═╝  ╚═══╝╚══════╝╚══════╝╚═╝     ╚═╝

  An open platform for operating large language models in production.
  Fine-tune, serve, deploy, and monitor any LLMs with ease.

Starting an LLM Server

To start an LLM server, use openllm start. For example, to start an OPT server, do the following:

openllm start opt

Following this, a Web UI will be accessible at http://localhost:3000 where you can experiment with the endpoints and sample input prompts.

OpenLLM provides a built-in Python client, allowing you to interact with the model. In a different terminal window or a Jupyter Notebook, create a client to start interacting with the model:

import openllm
client = openllm.client.HTTPClient('http://localhost:3000')
client.query('Explain to me the difference between "further" and "farther"')

You can also use the openllm query command to query the model from the terminal:

export OPENLLM_ENDPOINT=http://localhost:3000
openllm query 'Explain to me the difference between "further" and "farther"'

Visit http://localhost:3000/docs.json for OpenLLM's API specification.
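For instance, you can fetch the spec with curl, or call the generation endpoint directly from any HTTP client; the /v1/generate payload below is an assumption about the request shape, so verify the exact schema in docs.json:

# Download the OpenAPI specification
curl http://localhost:3000/docs.json

# Call the generation endpoint directly (field names are an assumption; check docs.json)
curl -X POST http://localhost:3000/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "What is the weather in San Francisco?"}'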

OpenLLM seamlessly supports many models and their variants. You can specify which variant of a model to serve by providing the --model-id argument, e.g.:

openllm start flan-t5 --model-id google/flan-t5-large

Note that OpenLLM also supports fine-tuned weight variants, custom model paths, and quantized weights for any of the supported models, as long as they can be loaded with the model's architecture. Refer to the supported models section for each model's architecture.
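For example, serving weights from a local checkpoint directory (the path below is hypothetical) would look like:

openllm start llama --model-id /path/to/your-fine-tuned-llama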

Use the openllm models command to see the list of models and their variants supported in OpenLLM.
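For example, to print the list of supported models and their variants:

openllm models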

🧩 Supported Models

The following models are currently supported in OpenLLM. By default, OpenLLM doesn't include dependencies to run all models. The extra model-specific dependencies can be installed with the instructions below:

Model | Architecture | Installation
chatglm | ChatGLMForConditionalGeneration | pip install "openllm[chatglm]"
dolly-v2 | GPTNeoXForCausalLM | pip install openllm
falcon | FalconForCausalLM | pip install "openllm[falcon]"
flan-t5 | T5ForConditionalGeneration | pip install "openllm[flan-t5]"
gpt-neox | GPTNeoXForCausalLM | pip install openllm
llama | LlamaForCausalLM | pip install "openllm[llama]"
mpt | MPTForCausalLM | pip install "openllm[mpt]"
opt | OPTForCausalLM | pip install "openllm[opt]"
stablelm | GPTNeoXForCausalLM | pip install openllm
starcoder | GPTBigCodeForCausalLM | pip install "openllm[starcoder]"
baichuan | BaiChuanForCausalLM | pip install "openllm[baichuan]"

Runtime Implementations (Experimental)

Different LLMs may have multiple runtime implementations. For instance, they might use PyTorch (pt), TensorFlow (tf), or Flax (flax).

If you wish to specify a particular runtime for a model, you can do so by setting the OPENLLM_{MODEL_NAME}_FRAMEWORK={runtime} environment variable before running openllm start.

For example, if you want to use the TensorFlow (tf) implementation for the flan-t5 model, you can use the following command:

OPENLLM_FLAN_T5_FRAMEWORK=tf openllm start flan-t5

Note: For GPU support on Flax, refer to JAX's installation guide to make sure you have JAX support for the corresponding CUDA version.

Quantisation

OpenLLM supports quantisation with bitsandbytes and GPTQ. To run inference with int8 quantisation via bitsandbytes:

openllm start mpt --quantize int8

To run inference with gptq, simply pass --quantize gptq:

openllm start falcon --model-id TheBloke/falcon-40b-instruct-GPTQ --quantize gptq --device 0

Note: to run GPTQ, make sure to install with pip install "openllm[gptq]". The weights of all supported models should be quantized before serving. See GPTQ-for-LLaMa for more information on GPTQ quantisation.

Fine-tuning support (Experimental)

You can serve OpenLLM models with any PEFT-compatible adapter layers by passing --adapter-id:

openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6-7b-quotes

It also supports adapters from custom paths:

openllm start opt --model-id facebook/opt-6.7b --adapter-id /path/to/adapters

To use multiple adapters, use the following format:

openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6.7b-lora --adapter-id aarnphm/opt-6.7b-lora:french_lora

By default, the first --adapter-id becomes the default LoRA layer, but users can optionally switch which LoRA layer to use for inference via /v1/adapters:

curl -X POST http://localhost:3000/v1/adapters --json '{"adapter_name": "vn_lora"}'

Note that when using multiple adapter names and IDs, it is recommended to set the default adapter before sending inference requests, to avoid any performance degradation.

To include adapters in the Bento, you can also pass a --adapter-id to openllm build:

openllm build opt --model-id facebook/opt-6.7b --adapter-id ...

Note: We will gradually roll out support for fine-tuning all models. The following models currently support fine-tuning: OPT, Falcon, LLaMA.

Integrating a New Model

OpenLLM encourages contributions by welcoming users to incorporate their custom LLMs into the ecosystem. Check out Adding a New Model Guide to see how you can do it yourself.

Embeddings

OpenLLM tentatively provides an embeddings endpoint for supported models, accessible via /v1/embeddings.

To use via CLI, simply call openllm embed:

openllm embed --endpoint http://localhost:3000 "I like to eat apples" -o json
{
  "embeddings": [
    0.006569798570126295,
    -0.031249752268195152,
    -0.008072729222476482,
    0.00847396720200777,
    -0.005293501541018486,
    ...<many embeddings>...
    -0.002078012563288212,
    -0.00676426338031888,
    -0.002022686880081892
  ],
  "num_tokens": 9
}

To invoke this endpoint, use client.embed from the Python SDK:

import openllm

client = openllm.client.HTTPClient("http://localhost:3000")

client.embed("I like to eat apples")
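Since /v1/embeddings is a plain HTTP endpoint, any HTTP client also works. A rough sketch with curl, where the request body shape is an assumption (consult docs.json for the exact schema):

curl -X POST http://localhost:3000/v1/embeddings \
  -H 'Content-Type: application/json' \
  -d '["I like to eat apples"]'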

Note: Currently, the following model families support embeddings: Llama, T5 (Flan-T5, FastChat, etc.), and ChatGLM.

⚙️ Integrations

OpenLLM is not just a standalone product; it's a building block designed to integrate with other powerful tools easily. We currently offer integration with BentoML, LangChain, and Transformers Agents.

BentoML

OpenLLM models can be integrated as a Runner in your BentoML service. These runners have a generate method that takes a string as a prompt and returns a corresponding output string. This will allow you to plug and play any OpenLLM models with your existing ML workflow.

import bentoml
import openllm
from bentoml.io import Text

model = "opt"

# Build a runner for the chosen model and attach it to a BentoML service
llm_config = openllm.AutoConfig.for_model(model)
llm_runner = openllm.Runner(model, llm_config=llm_config)

svc = bentoml.Service(
    name="llm-opt-service", runners=[llm_runner]
)

@svc.api(input=Text(), output=Text())
async def prompt(input_text: str) -> str:
    answer = await llm_runner.generate(input_text)
    return answer
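Assuming the snippet above is saved as service.py (a hypothetical filename), you can serve it locally with the standard BentoML CLI:

bentoml serve service:svc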

LangChain

To quickly start a local LLM with LangChain, simply do the following:

from langchain.llms import OpenLLM

llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b', device_map='auto')

llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")

langchain.llms.OpenLLM can also interact with a remote OpenLLM server. Given an OpenLLM server deployed elsewhere, you can connect to it by specifying its URL:

from langchain.llms import OpenLLM

llm = OpenLLM(server_url='http://44.23.123.1:3000', server_type='grpc')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")

To integrate a LangChain agent with BentoML, you can do the following:

import bentoml
from bentoml.io import Text
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenLLM

llm = OpenLLM(
    model_name='flan-t5',
    model_id='google/flan-t5-large',
    embedded=False,
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

svc = bentoml.Service("langchain-openllm", runners=[llm.runner])

@svc.api(input=Text(), output=Text())
def chat(input_text: str):
    return agent.run(input_text)

Note: You can find more examples under the examples folder.

Transformers Agents

OpenLLM seamlessly integrates with Transformers Agents.

Warning: The Transformers Agent is still at an experimental stage. It is recommended to install OpenLLM with pip install -r nightly-requirements.txt to get the latest API updates for the Hugging Face agent.

import transformers

agent = transformers.HfAgent("http://localhost:3000/hf/agent")  # URL that runs the OpenLLM server

agent.run("Is the following `text` positive or negative?", text="I don't like how this models is generate inputs")

Note: Only starcoder is currently supported with the Agent integration. The example above was run with four T4s on an EC2 g4dn.12xlarge instance.

You can also use the OpenLLM client to send questions to the running agent:

import openllm

client = openllm.client.HTTPClient("http://localhost:3000")

client.ask_agent(
    task="Is the following `text` positive or negative?",
    text="What are you thinking about?",
)

Gif showing Agent integration

🚀 Deploying to Production

There are several ways to deploy your LLMs:

๐Ÿณ Docker container

  1. Building a Bento: With OpenLLM, you can easily build a Bento for a specific model, like dolly-v2, using the build command:

    openllm build dolly-v2

    A Bento, in BentoML, is the unit of distribution. It packages your program's source code, models, files, artefacts, and dependencies.

  2. Containerize your Bento

    bentoml containerize <name:version>

    This generates an OCI-compatible Docker image that can be deployed anywhere Docker runs. For the best scalability and reliability of your LLM service in production, we recommend deploying with BentoCloud.
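    To try the image locally first, a minimal sketch using the same <name:version> placeholder as above (drop --gpus all if no NVIDIA container runtime is available):

    docker run --rm --gpus all -p 3000:3000 <name:version>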

☁️ BentoCloud

Deploy OpenLLM with BentoCloud, the serverless cloud for shipping and scaling AI applications.

  1. Create a BentoCloud account: sign up here for early access

  2. Log into your BentoCloud account:

    bentoml cloud login --api-token <your-api-token> --endpoint <bento-cloud-endpoint>

Note: Replace <your-api-token> and <bento-cloud-endpoint> with your specific API token and the BentoCloud endpoint respectively.

  3. Building a Bento: With OpenLLM, you can easily build a Bento for a specific model, such as dolly-v2:

    openllm build dolly-v2

  4. Pushing a Bento: Push your freshly-built Bento service to BentoCloud via the push command:

    bentoml push <name:version>

  5. Deploying a Bento: Deploy your LLMs to BentoCloud with a single bentoml deployment create command, following the deployment instructions.

👥 Community

Engage with like-minded individuals passionate about LLMs, AI, and more on our Discord!

OpenLLM is actively maintained by the BentoML team. Feel free to reach out and join us in our pursuit to make LLMs more accessible and easy to use ๐Ÿ‘‰ Join our Slack community!

🎁 Contributing

We welcome contributions! If you're interested in enhancing OpenLLM's capabilities or have any questions, don't hesitate to reach out in our Discord channel.

Check out our Developer Guide if you wish to contribute to OpenLLM's codebase.

๐Ÿ‡ Telemetry

OpenLLM collects usage data to enhance user experience and improve the product. We only report OpenLLM's internal API calls and ensure maximum privacy by excluding sensitive information. We will never collect user code, model data, or stack traces. For usage tracking, check out the code.

You can opt out of usage tracking by using the --do-not-track CLI option:

openllm [command] --do-not-track

Or by setting the environment variable OPENLLM_DO_NOT_TRACK=True:

export OPENLLM_DO_NOT_TRACK=True

📔 Citation

If you use OpenLLM in your research, we provide a citation to use:

@software{Pham_OpenLLM_Operating_LLMs_2023,
  author = {Pham, Aaron and Yang, Chaoyu and Sheng, Sean and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
  license = {Apache-2.0},
  month = jun,
  title = {{OpenLLM: Operating LLMs in production}},
  url = {https://github.com/bentoml/OpenLLM},
  year = {2023}
}

More Repositories

1. BentoML: The easiest way to serve AI apps and models - Build reliable Inference APIs, LLM apps, Multi-model chains, RAG service, and much more! (Python, 7,025 stars)
2. Yatai: Model Deployment at Scale on Kubernetes 🦄️ (TypeScript, 788 stars)
3. BentoDiffusion: A collection of diffusion models served with BentoML (Python, 331 stars)
4. stable-diffusion-server: Deploy Your Own Stable Diffusion Service (Python, 196 stars)
5. bentoctl: Fast model deployment on any cloud 🚀 (Python, 175 stars)
6. gallery: BentoML Example Projects 🎨 (Python, 134 stars)
7. BentoVLLM: Self-host LLMs with vLLM and BentoML (Python, 64 stars)
8. OCR-as-a-Service: Turn any OCR models into online inference API endpoint 🚀 🌖 (Python, 49 stars)
9. CLIP-API-service: CLIP as a service - Embed image and sentences, object recognition, visual reasoning, image classification and reverse image search (Jupyter Notebook, 48 stars)
10. transformers-nlp-service: Online Inference API for NLP Transformer models - summarization, text classification, sentiment analysis and more (Python, 43 stars)
11. llm-bench (Python, 28 stars)
12. rag-tutorials: A series of tutorials implementing a RAG service with BentoML and LlamaIndex (Python, 23 stars)
13. simple_di: Simple dependency injection framework for Python (Python, 21 stars)
14. BentoChatTTS (Python, 21 stars)
15. Fraud-Detection-Model-Serving: Online model serving with a fraud detection model trained with XGBoost on the IEEE-CIS dataset (Jupyter Notebook, 16 stars)
16. yatai-deployment: 🚀 Launching Bento in a Kubernetes cluster (Go, 16 stars)
17. google-cloud-run-deploy: Fast model deployment on Google Cloud Run (Python, 15 stars)
18. aws-sagemaker-deploy: Fast model deployment on AWS SageMaker (Python, 15 stars)
19. aws-lambda-deploy: Fast model deployment on AWS Lambda (Python, 14 stars)
20. aws-ec2-deploy: Fast model deployment on AWS EC2 (Python, 14 stars)
21. BentoLMDeploy: Self-host LLMs with LMDeploy and BentoML (Python, 14 stars)
22. yatai-image-builder: 🐳 Build OCI images for Bentos in k8s (Go, 14 stars)
23. sentence-embedding-bento: Sentence Embedding as a Service (Jupyter Notebook, 14 stars)
24. IF-multi-GPUs-demo (Python, 12 stars)
25. openllm-models (Python, 10 stars)
26. BentoSVD (Python, 10 stars)
27. BentoWhisperX (Python, 10 stars)
28. diffusers-examples: API serving for your diffusers models (Python, 10 stars)
29. BentoCLIP: Building a CLIP application using BentoML (Python, 8 stars)
30. Pneumonia-Detection-Demo: Pneumonia Detection - Healthcare Imaging Application built with BentoML and a fine-tuned Vision Transformer (ViT) model (Python, 8 stars)
31. yatai-chart: Helm Chart for installing Yatai on Kubernetes ⎈ (Mustache, 7 stars)
32. benchmark: BentoML Performance Benchmark 🆚 (Jupyter Notebook, 7 stars)
33. BentoTRTLLM (Python, 6 stars)
34. plugins: The Swiss Army knife for all things BentoML (Starlark, 6 stars)
35. bentoctl-operator-template (Python, 6 stars)
36. heroku-deploy: Deploy BentoML bundled models to Heroku (Python, 6 stars)
37. quickstart: BentoML Quickstart Example (Python, 6 stars)
38. BentoSentenceTransformers: How to build a sentence embedding application using BentoML (Python, 5 stars)
39. BentoYolo: BentoML service of YOLO v8 (Python, 5 stars)
40. google-compute-engine-deploy (HCL, 5 stars)
41. bentoml-core (Rust, 5 stars)
42. BentoControlNet (Python, 4 stars)
43. BentoBark (Python, 4 stars)
44. BentoRAG: Tutorial: Build RAG Apps with Custom Models Served with BentoML (Python, 4 stars)
45. BentoXTTS: How to build a text-to-speech application using BentoML (Python, 4 stars)
46. containerize-push-action: Docker's build-and-push-action equivalent for BentoML (TypeScript, 4 stars)
47. BentoBLIP: How to build an image captioning application on top of a BLIP model with BentoML (Python, 3 stars)
48. deploy-bento-action: A GitHub Action to deploy a Bento to the cloud (3 stars)
49. azure-functions-deploy: Fast model deployment on Azure Functions (Python, 3 stars)
50. azure-container-instances-deploy: Fast model deployment on Azure Container Instances (Python, 3 stars)
51. BentoFunctionCalling (Python, 3 stars)
52. llm-router: LLM Router Demo (Python, 3 stars)
53. BentoResnet (Python, 2 stars)
54. bentoml-arize-fraud-detection-workshop (Jupyter Notebook, 2 stars)
55. BentoSDXLTurbo: How to build an image generation application using BentoML (Python, 2 stars)
56. BentoSearch: Search with LLM (Python, 2 stars)
57. BentoInfinity (Python, 2 stars)
58. BentoMLCLLM (Python, 2 stars)
59. yatai-schemas (Go, 1 star)
60. bentoctl-workshops (Python, 1 star)
61. bentocloud-homepage-news (1 star)
62. yatai-common (Go, 1 star)
63. BentoMoirai (Python, 1 star)
64. .github: ✨🐱🦄️ (1 star)
65. bentoml-unsloth: BentoML Unsloth integration (Python, 1 star)
66. BentoShield (Python, 1 star)
67. LLMGateway (Python, 1 star)
68. BentoTGI (Python, 1 star)
69. openllm-benchmark (Python, 1 star)