  • Stars: 2,304
  • Rank: 19,990 (Top 0.4%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 1 year ago
  • Updated: 2 months ago


Repository Details

LightLLM


LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. LightLLM harnesses the strengths of numerous well-regarded open-source implementations, including but not limited to FasterTransformer, TGI, vLLM, and FlashAttention.

Features

  • Tri-process asynchronous collaboration: tokenization, model inference, and detokenization are performed asynchronously, leading to a considerable improvement in GPU utilization.
  • Nopad (Unpad): offers support for nopad attention operations across multiple models to efficiently handle requests with large length disparities.
  • Dynamic Batch: enables dynamic batch scheduling of requests.
  • FlashAttention: incorporates FlashAttention to improve speed and reduce GPU memory footprint during inference.
  • Tensor Parallelism: utilizes tensor parallelism over multiple GPUs for faster inference.
  • Token Attention: implements a token-wise KV cache memory management mechanism, allowing zero memory waste during inference (see the sketch after this list).
  • High-performance Router: collaborates with Token Attention to meticulously manage the GPU memory of each token, thereby optimizing system throughput.
  • Int8KV Cache: increases the number of tokens that can be cached to almost twice as many. Currently only LLaMA is supported.
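
To make the Token Attention idea concrete, here is a minimal sketch of token-wise KV cache slot management. It is purely illustrative (our own simplification, not LightLLM's actual implementation; the class and method names are invented): each request holds exactly as many single-token cache slots as it has tokens, so no memory is reserved for padding.

import collections

class TokenSlotPool:
    """Hand out single-token KV cache slots from a shared free list."""

    def __init__(self, num_slots):
        self.free = list(range(num_slots))            # each slot stores the KV of one token
        self.owned = collections.defaultdict(list)    # request id -> allocated slot indices

    def alloc(self, req_id, num_tokens=1):
        # Grab exactly num_tokens slots; nothing is padded or over-reserved.
        if len(self.free) < num_tokens:
            raise MemoryError('KV cache exhausted')
        slots = [self.free.pop() for _ in range(num_tokens)]
        self.owned[req_id].extend(slots)
        return slots

    def release(self, req_id):
        # Return a finished request's slots so other requests can reuse them.
        self.free.extend(self.owned.pop(req_id, []))

pool = TokenSlotPool(num_slots=8)
prompt_slots = pool.alloc('req-0', num_tokens=3)   # slots for the prompt tokens
step_slot = pool.alloc('req-0')                    # one new slot per decoded token
pool.release('req-0')                              # all slots return to the free list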

Supported Model List

When you start Qwen-7b, you need to set the parameters '--eos_id 151643 --trust_remote_code'.

ChatGLM2 needs to set the parameter '--trust_remote_code'.

Baichuan needs to set the parameter '--trust_remote_code'.

InternLM needs to set the parameter '--trust_remote_code'.
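
For example, a Qwen-7B launch would combine those flags with the usual server arguments (a sketch only; the model path and token budget below are placeholders):

python -m lightllm.server.api_server --model_dir /path/Qwen-7B      \
                                     --trust_remote_code            \
                                     --eos_id 151643                \
                                     --tp 1                         \
                                     --max_total_token_num 120000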

Get started

Requirements

The code has been tested with Pytorch>=1.3, CUDA 11.8, and Python 3.9. To install the necessary dependencies, refer to the provided requirements.txt and run:

pip install -r requirements.txt

Container

You can use the official Docker container to run the model more easily. To do this, follow these steps:

  • Pull the container from the GitHub Container Registry:

    docker pull ghcr.io/modeltc/lightllm:main
  • Run the container with GPU support and port mapping:

    docker run -it --gpus all -p 8080:8080                  \
            -v your_local_path:/data/                       \
            ghcr.io/modeltc/lightllm:main /bin/bash
  • Alternatively, you can build the container yourself:

    docker build -t <image_name> .
    docker run -it --gpus all -p 8080:8080                  \
            -v your_local_path:/data/                       \
            <image_name> /bin/bash
  • You can also use a helper script to launch both the container and the server:

    python tools/quick_launch_docker.py --help
  • Note: If you use multiple GPUs, you may need to increase the shared memory size by adding --shm-size to the docker run command.
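  • For example, a multi-GPU launch might look like this (2g is only an illustrative value; size the shared memory to your workload):

    docker run -it --gpus all -p 8080:8080 --shm-size 2g    \
            -v your_local_path:/data/                       \
            ghcr.io/modeltc/lightllm:main /bin/bash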

Installation

  • Install from the source code by

    python setup.py install

The code has been tested on a range of GPUs, including the A100, A800, 4090, and H800. If you are running the code on an A100, A800, etc., we recommend triton==2.0.0.dev20221202. If you are running the code on an H800, etc., you need to compile and install triton==2.1.0 from source (its GitHub repository). If the code does not work on other GPUs, try modifying the Triton kernels used in model inference.
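
For example, the A100-class pin can typically be installed straight from PyPI (assuming that dev build is still published; otherwise install Triton from source as described above):

pip install triton==2.0.0.dev20221202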

RUN LLaMA

With its efficient router and Token Attention, LightLLM can be deployed as a service and achieve state-of-the-art throughput.

Launch the server:

python -m lightllm.server.api_server --model_dir /path/llama-7B     \
                                     --host 0.0.0.0                 \
                                     --port 8080                    \
                                     --tp 1                         \
                                     --max_total_token_num 120000

The parameter max_total_token_num depends on the GPU memory of the deployment environment. A larger value allows more concurrent requests to be processed, increasing system concurrency. For more startup parameters, please refer to api_server.py or ApiServerArgs.md.
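As a rough back-of-the-envelope sanity check (our own estimate, not a figure from the LightLLM docs): LLaMA-7B in fp16 needs about 2 × 32 layers × 4096 hidden size × 2 bytes ≈ 0.5 MB of KV cache per token, so max_total_token_num 120000 corresponds to roughly 60 GB of KV cache, which together with the ~13 GB of model weights fits within an 80 GB A100/A800.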

To initiate a query in the shell:

curl http://127.0.0.1:8080/generate     \
    -X POST                             \
    -d '{"inputs":"What is AI?","parameters":{"max_new_tokens":17, "frequency_penalty":1}}' \
    -H 'Content-Type: application/json'

To query from Python:

import json

import requests

# Send a single generation request to a running LightLLM server.
url = 'http://localhost:8080/generate'
headers = {'Content-Type': 'application/json'}
data = {
    'inputs': 'What is AI?',
    'parameters': {
        'do_sample': False,
        'ignore_eos': False,
        'max_new_tokens': 1024,
    },
}
response = requests.post(url, headers=headers, data=json.dumps(data))
if response.status_code == 200:
    print(response.json())
else:
    print('Error:', response.status_code, response.text)
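
Because the router batches requests dynamically, the server can also be exercised with several clients at once. The snippet below is our own minimal sketch (not part of the LightLLM repository) that sends a few prompts concurrently with a thread pool:

import json
from concurrent.futures import ThreadPoolExecutor

import requests

url = 'http://localhost:8080/generate'
headers = {'Content-Type': 'application/json'}

def generate(prompt):
    # One blocking request per prompt; the server batches them internally.
    payload = {'inputs': prompt, 'parameters': {'max_new_tokens': 64}}
    response = requests.post(url, headers=headers, data=json.dumps(payload))
    response.raise_for_status()
    return response.json()

prompts = ['What is AI?', 'Explain tensor parallelism.', 'What is a KV cache?']
with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    for result in pool.map(generate, prompts):
        print(result)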

Performance

Service Performance

We compared the service performance of LightLLM and vLLM==0.1.2 on LLaMA-7B using an A800 GPU with 80 GB of memory.

To begin, prepare the data as follows:

wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json

Launch the service:

python -m lightllm.server.api_server --model_dir /path/llama-7b --tp 1 --max_total_token_num 121060 --tokenizer_mode auto

Evaluation:

cd test
python benchmark_serving.py --tokenizer /path/llama-7b --dataset /path/ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 2000 --request-rate 200

The performance comparison results are presented below:

              vLLM                LightLLM
Total time    361.79 s            188.85 s
Throughput    5.53 requests/s     10.59 requests/s
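
Both figures are consistent with the 2,000-prompt workload used above: 2000 requests / 361.79 s ≈ 5.53 requests/s for vLLM and 2000 requests / 188.85 s ≈ 10.59 requests/s for LightLLM.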

Static inference performance

For debugging, we offer static performance testing scripts for various models. For instance, you can evaluate the inference performance of the LLaMA model by running:

cd test/model
python test_llama.py

FAQ

  • The LLaMA tokenizer fails to load.
    • Consider resolving this by running pip install protobuf==3.20.0.
  • Error: PTX .version 7.4 does not support .target sm_89.
    • Launch with bash tools/resolve_ptx_version python -m lightllm.server.api_server ...

Community

For further information and discussion, join our Discord server.

License

This repository is released under the Apache-2.0 license.

Acknowledgement

We learned a lot from the following projects when developing LightLLM: FasterTransformer, TGI, vLLM, and FlashAttention.

More Repositories

1. MQBench - Model Quantization Benchmark (Shell, 742 stars)
2. United-Perception - United Perception (Python, 427 stars)
3. llmc - Official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit" (Python, 184 stars)
4. Dipoorlet - Offline quantization tools for deployment (Python, 109 stars)
5. awesome-lm-system - Summary of system papers/frameworks/code/tools on training or serving large models (56 stars)
6. TFMQ-DM - [CVPR 2024 Highlight] Official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models" (Jupyter Notebook, 50 stars)
7. mqbench-paper (Python, 44 stars)
8. rank_dataset - PyTorch Dataset Rank Dataset (Python, 37 stars)
9. NART - "NART is not A RunTime", a deep learning inference framework (Python, 37 stars)
10. EasyLLM - Built upon Megatron-DeepSpeed and the HuggingFace Trainer, EasyLLM reorganizes the code logic with a focus on usability while preserving training efficiency (Python, 35 stars)
11. Outlier_Suppression_Plus - Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling" (Python, 35 stars)
12. NNLQP (Python, 33 stars)
13. QLLM - [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" (Python, 33 stars)
14. LPCV2021_Winner_Solution (Python, 29 stars)
15. pyvlova - Yet another polyhedral compiler for deep learning (Python, 19 stars)
16. LPCV_2023_solution (Python, 18 stars)
17. AAAI2023_EAMPD - AAAI 2023: Efficient and Accurate Models towards Practical Deep Learning Baseline (13 stars)
18. Prototype (Python, 12 stars)
19. L2_Compression (Python, 11 stars)
20. OmniBal (Python, 9 stars)
21. msbench - A tool for model sparsification based on torch.fx (Python, 7 stars)
22. Imagenet-S - Robustness to real-world system noise (Python, 4 stars)
23. mtc-token-healing - Token healing implementation in Rust (Rust, 3 stars)
24. FCPTS (Python, 2 stars)
25. general-sam - A general suffix automaton implementation in Rust with Python bindings (Rust, 2 stars)
26. statecs (Rust, 1 star)
27. general-sam-py - Python bindings for general-sam and some utilities (Python, 1 star)
28. pyrotom - Python code hotfix and refactoring on the fly (Python, 1 star)