# 🦾 OpenLLM

An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLMs with ease.
## Introduction
With OpenLLM, you can run inference with any open-source large language model, deploy models to the cloud or on-premises, and build powerful AI apps.
- **State-of-the-art LLMs**: built-in support for a wide range of open-source LLMs and model runtimes, including Llama 2, StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more.
- **Freedom To Build**: first-class support for LangChain, BentoML, and Hugging Face, allowing you to easily create your own AI apps by composing LLMs with other models and services.
- **Fine-tuning**: fine-tune any LLM with `LLM.tuning()`. (Coming soon)
## Getting Started
To use OpenLLM, you need Python 3.8 (or newer) and `pip` installed on your system. We highly recommend using a virtual environment to prevent package conflicts.

You can install OpenLLM using `pip` as follows:

```bash
pip install openllm
```
To verify that it is installed correctly, run:

```bash
$ openllm -h

Usage: openllm [OPTIONS] COMMAND [ARGS]...

  An open platform for operating large language models in production.
  Fine-tune, serve, deploy, and monitor any LLMs with ease.
```
### Starting an LLM Server
To start an LLM server, use `openllm start`. For example, to start an OPT server, run:

```bash
openllm start opt
```

Following this, a web UI will be accessible at http://localhost:3000, where you can experiment with the endpoints and sample input prompts.
OpenLLM provides a built-in Python client for interacting with the model. In a different terminal window or a Jupyter notebook, create a client and start sending queries:
```python
import openllm

client = openllm.client.HTTPClient('http://localhost:3000')
client.query('Explain to me the difference between "further" and "farther"')
```
You can also use the `openllm query` command to query the model from the terminal:

```bash
export OPENLLM_ENDPOINT=http://localhost:3000
openllm query 'Explain to me the difference between "further" and "farther"'
```
Visit http://localhost:3000/docs.json for OpenLLM's API specification.
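For instance, text generation is exposed over plain HTTP. The snippet below is a sketch only: it assumes a `/v1/generate` route that accepts a JSON body with a `prompt` field and an optional `llm_config` object, so double-check the exact route and schema in `docs.json` for your version:

```bash
# Sketch: call the generation endpoint directly. The route and payload shape
# are assumptions; consult http://localhost:3000/docs.json for the real schema.
curl -X POST http://localhost:3000/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "What is the meaning of life?", "llm_config": {"max_new_tokens": 128}}'
```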
OpenLLM seamlessly supports many models and their variants. Users can also specify a different variant of the model to be served by providing the `--model-id` argument, e.g.:

```bash
openllm start flan-t5 --model-id google/flan-t5-large
```
Note that `openllm` also supports fine-tuned weights, custom model paths, and quantized weights for any of the supported models, as long as they can be loaded with the model's architecture. Refer to the supported models section for each model's architecture.
Use the `openllm models` command to see the list of models and their variants supported in OpenLLM.
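For example, running it with no arguments prints the supported model families and their variants:

```bash
# List the models (and their variants) that this OpenLLM installation supports.
openllm models
```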
## 🧩 Supported Models
The following models are currently supported in OpenLLM. By default, OpenLLM doesn't include the dependencies needed to run all models; the extra model-specific dependencies can be installed separately.
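As a sketch, assuming the extras name follows the model family name (only the `gptq` extra is confirmed later in this document; check the project's documentation for the exact extras available in your version):

```bash
# Hypothetical example: install OpenLLM together with the extra dependencies
# for a specific model family. The "llama" extras name is an assumption.
pip install "openllm[llama]"
```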
### Runtime Implementations (Experimental)

Different LLMs may have multiple runtime implementations. For instance, they might use PyTorch (`pt`), TensorFlow (`tf`), or Flax (`flax`).

To specify a particular runtime for a model, set the `OPENLLM_{MODEL_NAME}_FRAMEWORK={runtime}` environment variable before running `openllm start`.

For example, to use the TensorFlow (`tf`) implementation of the `flan-t5` model, run the following command:

```bash
OPENLLM_FLAN_T5_FRAMEWORK=tf openllm start flan-t5
```
> **Note**: For GPU support on Flax, refer to Jax's installation instructions to make sure you have Jax support for the corresponding CUDA version.
### Quantisation

OpenLLM supports quantisation with bitsandbytes and GPTQ.

To run inference with `int8` quantisation (via bitsandbytes), pass `--quantize int8`:

```bash
openllm start mpt --quantize int8
```

To run inference with GPTQ, simply pass `--quantize gptq`:

```bash
openllm start falcon --model-id TheBloke/falcon-40b-instruct-GPTQ --quantize gptq --device 0
```

> **Note**: To run GPTQ, make sure to install OpenLLM with `pip install "openllm[gptq]"`. The weights of all supported models should be quantised before serving. See GPTQ-for-LLaMa for more information on GPTQ quantisation.
### Fine-tuning support (Experimental)

You can serve OpenLLM models with any PEFT-compatible adapter layers using `--adapter-id`:

```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6-7b-quotes
```

Adapters from custom paths are also supported:

```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id /path/to/adapters
```

To use multiple adapters, use the following format:

```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6.7b-lora --adapter-id aarnphm/opt-6.7b-lora:french_lora
```

By default, the first `--adapter-id` becomes the default LoRA layer, but users can optionally change which LoRA layer to use for inference via `/v1/adapters`:

```bash
curl -X POST http://localhost:3000/v1/adapters --json '{"adapter_name": "vn_lora"}'
```
Note that when using multiple adapter names and IDs, it is recommended to switch back to the default adapter before sending inference requests, to avoid any performance degradation.
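As a sketch of that recommendation, reusing the `/v1/adapters` endpoint shown above (the adapter name below is hypothetical; use whatever name your first `--adapter-id` was registered under):

```bash
# Hypothetical sketch: switch back to the default adapter before regular
# inference. Replace "default" with the name your default adapter actually has.
curl -X POST http://localhost:3000/v1/adapters --json '{"adapter_name": "default"}'
```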
To include adapters in the resulting Bento, you can also pass `--adapter-id` to `openllm build`:

```bash
openllm build opt --model-id facebook/opt-6.7b --adapter-id ...
```

> **Note**: We will gradually roll out fine-tuning support for all models. The following models currently support fine-tuning: OPT, Falcon, LLaMA.
### Integrating a New Model

OpenLLM encourages contributions by welcoming users to incorporate their custom LLMs into the ecosystem. Check out the Adding a New Model Guide to see how you can do it yourself.
### Embeddings

OpenLLM tentatively provides an embeddings endpoint for supported models, accessible via `/v1/embeddings`.

To use it from the CLI, simply call `openllm embed`:

```bash
openllm embed --endpoint http://localhost:3000 "I like to eat apples" -o json
```

```json
{
  "embeddings": [
    0.006569798570126295,
    -0.031249752268195152,
    -0.008072729222476482,
    0.00847396720200777,
    -0.005293501541018486,
    ...
    -0.002078012563288212,
    -0.00676426338031888,
    -0.002022686880081892
  ],
  "num_tokens": 9
}
```
To invoke this endpoint programmatically, use `client.embed` from the Python SDK:

```python
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")
client.embed("I like to eat apples")
```

> **Note**: Currently, the following model families support embeddings: Llama, T5 (Flan-T5, FastChat, etc.), and ChatGLM.
## Integrations

OpenLLM is not just a standalone product; it's a building block designed to integrate easily with other powerful tools. We currently offer integrations with BentoML, LangChain, and Transformers Agents.
### BentoML

OpenLLM models can be integrated as a Runner in your BentoML service. These runners have a `generate` method that takes a string prompt and returns a corresponding output string. This allows you to plug and play any OpenLLM model with your existing ML workflow.
```python
import bentoml
import openllm
from bentoml.io import Text

model = "opt"

llm_config = openllm.AutoConfig.for_model(model)
llm_runner = openllm.Runner(model, llm_config=llm_config)

svc = bentoml.Service(name="llm-opt-service", runners=[llm_runner])


@svc.api(input=Text(), output=Text())
async def prompt(input_text: str) -> str:
    answer = await llm_runner.generate(input_text)
    return answer
```
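Assuming the snippet above is saved as `service.py` (the file name is just an example), you could then serve it locally with BentoML's standard CLI:

```bash
# Serve the BentoML service defined above; "service" refers to service.py.
bentoml serve service:svc
```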
### LangChain

To quickly start a local LLM with `langchain`, simply do the following:

```python
from langchain.llms import OpenLLM

llm = OpenLLM(model_name="dolly-v2", model_id="databricks/dolly-v2-7b", device_map="auto")

llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
`langchain.llms.OpenLLM` can also interact with a remote OpenLLM server. Given an OpenLLM server deployed elsewhere, you can connect to it by specifying its URL:

```python
from langchain.llms import OpenLLM

llm = OpenLLM(server_url="http://44.23.123.1:3000", server_type="grpc")
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
To integrate a LangChain agent with BentoML, you can do the following:

```python
import bentoml
from bentoml.io import Text
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenLLM

llm = OpenLLM(
    model_name="flan-t5",
    model_id="google/flan-t5-large",
    embedded=False,
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
svc = bentoml.Service("langchain-openllm", runners=[llm.runner])


@svc.api(input=Text(), output=Text())
def chat(input_text: str):
    return agent.run(input_text)
```
> **Note**: You can find more examples under the examples folder.
### Transformers Agents

OpenLLM seamlessly integrates with Transformers Agents.

> **Warning**: The Transformers Agents integration is still at an experimental stage. It is recommended to install OpenLLM with `pip install -r nightly-requirements.txt` to get the latest API updates for the Hugging Face agent.
```python
import transformers

agent = transformers.HfAgent("http://localhost:3000/hf/agent")  # URL of the running OpenLLM server
agent.run("Is the following `text` positive or negative?", text="I don't like how this models is generate inputs")
```
> **Note**: Only `starcoder` is currently supported with the agent integration. The example above was run on an EC2 `g4dn.12xlarge` instance with four T4 GPUs.
If you want to use the OpenLLM client to ask questions to the running agent, you can do so as well:

```python
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")
client.ask_agent(
    task="Is the following `text` positive or negative?",
    text="What are you thinking about?",
)
```
## Deploying to Production

There are several ways to deploy your LLMs:

### 🐳 Docker container

- **Building a Bento**: With OpenLLM, you can easily build a Bento for a specific model, like `dolly-v2`, using the `build` command:

  ```bash
  openllm build dolly-v2
  ```

  A Bento, in BentoML, is the unit of distribution. It packages your program's source code, models, files, artefacts, and dependencies.

- **Containerize your Bento**:

  ```bash
  bentoml containerize <name:version>
  ```

  This generates an OCI-compatible Docker image that can be deployed anywhere Docker runs, as sketched below. For the best scalability and reliability of your LLM service in production, we recommend deploying with BentoCloud.
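As a minimal sketch of running that image, assuming an NVIDIA GPU host and the default port 3000 (`<name:version>` is the tag printed by `bentoml containerize`):

```bash
# Run the containerized Bento locally; --gpus all assumes the NVIDIA container
# runtime is installed. Drop it to run on CPU.
docker run --rm --gpus all -p 3000:3000 <name:version>
```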
### ☁️ BentoCloud

Deploy OpenLLM with BentoCloud, the serverless cloud for shipping and scaling AI applications.

- **Create a BentoCloud account**: sign up here for early access.

- **Log in to your BentoCloud account**:

  ```bash
  bentoml cloud login --api-token <your-api-token> --endpoint <bento-cloud-endpoint>
  ```

  > **Note**: Replace `<your-api-token>` and `<bento-cloud-endpoint>` with your specific API token and the BentoCloud endpoint respectively.

- **Build a Bento**: With OpenLLM, you can easily build a Bento for a specific model, such as `dolly-v2`:

  ```bash
  openllm build dolly-v2
  ```

- **Push a Bento**: Push your freshly built Bento to BentoCloud with the `push` command:

  ```bash
  bentoml push <name:version>
  ```

- **Deploy a Bento**: Deploy your LLMs to BentoCloud with a single `bentoml deployment create` command, following the deployment instructions.
## 👥 Community

Engage with like-minded individuals passionate about LLMs, AI, and more on our Discord!

OpenLLM is actively maintained by the BentoML team. Feel free to reach out and join us in our pursuit to make LLMs more accessible and easy to use.
## Contributing

We welcome contributions! If you're interested in enhancing OpenLLM's capabilities or have any questions, don't hesitate to reach out in our Discord channel.

Check out our Developer Guide if you wish to contribute to OpenLLM's codebase.
## Telemetry

OpenLLM collects usage data to enhance the user experience and improve the product. We only report OpenLLM's internal API calls and ensure maximum privacy by excluding sensitive information. We will never collect user code, model data, or stack traces. For details on usage tracking, check out the code.

You can opt out of usage tracking with the `--do-not-track` CLI option:

```bash
openllm [command] --do-not-track
```

Or by setting the `OPENLLM_DO_NOT_TRACK=True` environment variable:

```bash
export OPENLLM_DO_NOT_TRACK=True
```
## Citation

If you use OpenLLM in your research, we provide a citation to use:

```bibtex
@software{Pham_OpenLLM_Operating_LLMs_2023,
  author = {Pham, Aaron and Yang, Chaoyu and Sheng, Sean and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
  license = {Apache-2.0},
  month = jun,
  title = {{OpenLLM: Operating LLMs in production}},
  url = {https://github.com/bentoml/OpenLLM},
  year = {2023}
}
```