
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs

PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU

TL;DR

PowerInfer is a CPU/GPU LLM inference engine leveraging activation locality for your device.

License: MIT

Project Kanban

Latest News 🔥

  • [2023/12/24] We released an online Gradio demo for Falcon(ReLU)-40B-FP16!
  • [2023/12/19] We officially released PowerInfer!

Demo 🔥

[Video: powerinfer-live-demo.mp4]

PowerInfer vs. llama.cpp on a single RTX 4090 (24G) running Falcon(ReLU)-40B-FP16, with an 11x speedup!

Both PowerInfer and llama.cpp ran on the same hardware and fully utilized the VRAM of the RTX 4090.

Note

Live Demo Online ⚡️

Try out our Gradio server hosting Falcon(ReLU)-40B-FP16 on an RTX 4090!

Experimental and without warranties 🚧

Abstract

We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key insight underlying the design of PowerInfer is to exploit the high locality inherent in LLM inference, which is characterized by a power-law distribution in neuron activation.

This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits this insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity.

Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
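
To make the hot/cold split concrete, here is a minimal, hypothetical Python sketch (not PowerInfer's actual implementation) of how offline activation counts might be used to place the most frequently activated neurons on the GPU under a VRAM budget. The function name, the statistics format, and the numbers are illustrative assumptions.

# Toy illustration of locality-centric offloading; not PowerInfer's real code.
# Assumes activation_counts[i] records how often neuron i fired during offline profiling.
import numpy as np

def partition_neurons(activation_counts, bytes_per_neuron, vram_budget_bytes):
    order = np.argsort(activation_counts)[::-1]            # hottest neurons first
    gpu_capacity = int(vram_budget_bytes // bytes_per_neuron)
    hot = order[:gpu_capacity]                              # preloaded onto the GPU
    cold = order[gpu_capacity:]                             # computed on the CPU
    return hot, cold

# A power-law-like profile: a few neurons fire almost always, most fire rarely.
counts = np.array([900, 850, 40, 30, 20, 10, 5, 3, 2, 1])
hot, cold = partition_neurons(counts, bytes_per_neuron=4096, vram_budget_bytes=4 * 4096)
print("hot neurons:", hot.tolist())                         # the four most frequently activated neurons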

Features

PowerInfer is a high-speed and easy-to-use inference engine for deploying LLMs locally.

PowerInfer is fast with:

  • Locality-centric design: Utilizes sparse activation and the 'hot'/'cold' neuron concept for efficient LLM inference, ensuring high speed with lower resource demands.
  • Hybrid CPU/GPU Utilization: Seamlessly integrates memory/computation capabilities of CPU and GPU for a balanced workload and faster processing.

PowerInfer is flexible and easy to use with:

  • Easy Integration: Compatible with popular ReLU-sparse models.
  • Local Deployment Ease: Designed and deeply optimized for local deployment on consumer-grade hardware, enabling low-latency LLM inference and serving on a single GPU.
  • Backward Compatibility: Although PowerInfer is distinct from llama.cpp, you can use most of the examples/ in the same way as with llama.cpp, such as server and batched generation (see the sketch after this list). PowerInfer also supports inference with llama.cpp's model weights for compatibility, but there will be no performance gain.
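
For example, if the bundled server example behaves like llama.cpp's HTTP server (an assumption you should verify against your build), you could start it and query it roughly as follows; a minimal sketch, assuming the requests package is installed.

# Hypothetical client for the server example, assuming it mirrors llama.cpp's
# HTTP API (POST /completion with "prompt" and "n_predict").
# Start the server first, e.g.: ./build/bin/server -m /PATH/TO/MODEL
import requests

resp = requests.post(
    "http://127.0.0.1:8080/completion",                    # llama.cpp-style default endpoint (assumed)
    json={"prompt": "Once upon a time", "n_predict": 64},
    timeout=300,
)
resp.raise_for_status()
print(resp.json().get("content", ""))                       # generated continuation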

You can use these models with PowerInfer today:

  • Falcon-40B
  • Llama2 family

We have tested PowerInfer on the following platforms:

  • x86-64 CPU (with AVX2 instructions) on Linux
  • x86-64 CPU and NVIDIA GPU on Linux
  • Apple M-series chips on macOS (since we have not yet optimized for macOS, the performance improvement is not significant for now)

And new features coming soon:

  • Mistral-7B model
  • Metal backend for sparse inference on macOS

Please refer to our Project Kanban for our current development focus.

Getting Started

Setup and Installation

Get the Code

git clone https://github.com/SJTU-IPADS/PowerInfer
cd PowerInfer
pip install -r requirements.txt # install Python helpers' dependencies

Build

To build PowerInfer, you have two options. These commands should be run from the root directory of the project.

Using CMake (3.13+) on Linux or macOS:

  • If you have an NVIDIA GPU:
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
  • If you have just a CPU:
cmake -S . -B build
cmake --build build --config Release

Model Weights

PowerInfer models are stored in a special format called PowerInfer GGUF, based on the GGUF format, which contains both LLM weights and predictor weights.

Download PowerInfer GGUF via Hugging Face

You can obtain PowerInfer GGUF weights at *.powerinfer.gguf as well as profiled model activation statistics for 'hot'-neuron offloading from each Hugging Face repo below.

Base Model           PowerInfer GGUF
LLaMA(ReLU)-2-7B     PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF
LLaMA(ReLU)-2-13B    PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF
Falcon(ReLU)-40B     PowerInfer/ReluFalcon-40B-PowerInfer-GGUF
LLaMA(ReLU)-2-70B    PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF

We suggest downloading/cloning the whole repo (as sketched after the layout below) so that PowerInfer can automatically make use of this directory structure for feature-complete model offloading:

.
├── *.powerinfer.gguf (Unquantized PowerInfer model)
├── *.q4.powerinfer.gguf (INT4 quantized PowerInfer model, if available)
├── activation (Profiled activation statistics for fine-grained FFN offloading)
│   ├── activation_x.pt (Profiled activation statistics for layer x)
│   └── ...
└── *.[q4].powerinfer.gguf.generated.gpuidx (Generated GPU index at runtime for corresponding model)
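
A minimal sketch for cloning an entire repo with huggingface_hub (pip install huggingface_hub); adjust repo_id and local_dir for your model.

# Sketch: download a whole PowerInfer GGUF repo, preserving the layout shown above.
# The repo_id is taken from the table above; the local_dir name is an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF",
    local_dir="./ReluLLaMA-7B-PowerInfer-GGUF",
)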

Convert from Original Model Weights + Predictor Weights

Hugging Face limits a single model weight file to 50 GiB. For unquantized models >= 40B, you can convert the PowerInfer GGUF from the original model weights and predictor weights obtained from Hugging Face.

Base Model           Original Model              Predictor
LLaMA(ReLU)-2-7B     SparseLLM/ReluLLaMA-7B      PowerInfer/ReluLLaMA-7B-Predictor
LLaMA(ReLU)-2-13B    SparseLLM/ReluLLaMA-13B     PowerInfer/ReluLLaMA-13B-Predictor
Falcon(ReLU)-40B     SparseLLM/ReluFalcon-40B    PowerInfer/ReluFalcon-40B-Predictor
LLaMA(ReLU)-2-70B    SparseLLM/ReluLLaMA-70B     PowerInfer/ReluLLaMA-70B-Predictor

You can use the following command to convert the original model weights and predictor weights to PowerInfer GGUF:

# make sure that you have done `pip install -r requirements.txt`
python convert.py --outfile /PATH/TO/POWERINFER/GGUF/REPO/MODELNAME.powerinfer.gguf /PATH/TO/ORIGINAL/MODEL /PATH/TO/PREDICTOR
# python convert.py --outfile ./ReluLLaMA-70B-PowerInfer-GGUF/llama-70b-relu.powerinfer.gguf ./SparseLLM/ReluLLaMA-70B ./PowerInfer/ReluLLaMA-70B-Predictor

For the same reason, we suggest keeping the same directory structure as PowerInfer GGUF repos after conversion.

Inference

For CPU-only and CPU-GPU hybrid inference with all available VRAM, you can use the following instructions to run PowerInfer:

./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt
# ./build/bin/main -m ./ReluFalcon-40B-PowerInfer-GGUF/falcon-40b-relu.q4.powerinfer.gguf -n 128 -t 8 -p "Once upon a time"

If you want to limit the VRAM usage of the GPU:

./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --vram-budget $vram_gb
# ./build/bin/main -m ./ReluLLaMA-7B-PowerInfer-GGUF/llama-7b-relu.powerinfer.gguf -n 128 -t 8 -p "Once upon a time" --vram-budget 8

Under CPU-GPU hybrid inference, PowerInfer will automatically offload all dense activation blocks to the GPU, then split the FFN and offload part of it to the GPU if possible.
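
If you are unsure what --vram-budget value to pass, the hypothetical helper below estimates a budget from the currently free VRAM with some headroom; it assumes PyTorch with CUDA support is installed, and PowerInfer itself does not need it.

# Hypothetical helper for picking a --vram-budget value (in GiB).
# Assumes PyTorch with CUDA support; for convenience only.
import torch

def suggest_vram_budget_gib(headroom_gib=1.0):
    free_bytes, _total_bytes = torch.cuda.mem_get_info()    # (free, total) in bytes
    return max(free_bytes / (1 << 30) - headroom_gib, 0.0)

if torch.cuda.is_available():
    print(f"--vram-budget {suggest_vram_budget_gib():.1f}")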

Quantization

PowerInfer has optimized quantization support for INT4 (Q4_0) models. You can use the following instructions to quantize a PowerInfer GGUF model:

./build/bin/quantize /PATH/TO/MODEL /PATH/TO/OUTPUT/QUANTIZED/MODEL Q4_0
# ./build/bin/quantize ./ReluFalcon-40B-PowerInfer-GGUF/falcon-40b-relu.powerinfer.gguf ./ReluFalcon-40B-PowerInfer-GGUF/falcon-40b-relu.q4.powerinfer.gguf Q4_0

Then you can use the quantized model for inference with PowerInfer, following the same instructions as above.

Evaluation

We evaluated PowerInfer vs. llama.cpp on a single RTX 4090 (24G) with a series of FP16 ReLU models under inputs of length 64, and the results are shown below. PowerInfer achieves up to 11x speedup on Falcon 40B and up to 3x speedup on Llama 2 70B.

[Figure: github-eval-4090] The x-axis indicates the output length, and the y-axis represents the speedup compared with llama.cpp. The number above each bar indicates the end-to-end generation speed (total tokens generated divided by total prompting + generation time, in tokens/s).

We also evaluated PowerInfer on a single RTX 2080Ti (11G) with INT4 ReLU models under inputs of length 8, and the results are illustrated in the same way as above. PowerInfer achieves up to 8x speedup on Falcon 40B and up to 3x speedup on Llama 2 70B.

[Figure: github-eval-2080ti-q4]
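
For reference, the plotted metrics follow the definitions given in the caption above; a tiny worked example with made-up numbers:

# Worked example of the plotted metrics, using made-up (non-reported) numbers.
tokens_generated = 128                               # total tokens generated
total_time_s = 11.0                                  # prompt + generation time, seconds
generation_speed = tokens_generated / total_time_s   # end-to-end speed, ~11.6 tokens/s
baseline_speed = 1.0                                 # hypothetical llama.cpp speed, tokens/s
speedup = generation_speed / baseline_speed          # ~11.6x over the baseline
print(f"{generation_speed:.2f} tokens/s, {speedup:.1f}x speedup")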

Please refer to our paper for more evaluation details.

FAQs

  1. What if I encounter CUDA_ERROR_OUT_OF_MEMORY?

    • You can try running with the --reset-gpu-index argument to rebuild the GPU index for this model and avoid any stale cache.
    • Due to our current implementation, model offloading might not be as accurate as expected. You can try --vram-budget with a slightly lower value, or --disable-gpu-index to disable FFN offloading.
  2. Does PowerInfer support mistral, original llama, Qwen, ...?

    • Currently we only support models with the ReLU/ReGLU/Squared ReLU activation functions, so these models are not supported yet (see the activation sketch after this list). It's worth mentioning that a paper has demonstrated that using the ReLU/ReGLU activation function has a negligible impact on convergence and performance.
  3. Why is there a noticeable downgrade in the performance metrics of our current ReLU model, particularly the 70B model?

    • In contrast to the typical requirement of around 2T tokens for LLM training, our model's fine-tuning was conducted with only 5B tokens. This insufficient retraining has resulted in the model's inability to regain its original performance. We are actively working on updating to a more capable model, so please stay tuned.
  4. What if...

    • Issues are welcome! Please feel free to open an issue and attach your running environment and running parameters. We will try our best to help you.
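
For reference on FAQ 2, the supported activation functions can be written as follows; a PyTorch sketch, with ReGLU following the GLU-variants formulation.

# Reference definitions of the activations mentioned in FAQ 2 (PyTorch sketch).
import torch

def relu(x):
    return torch.relu(x)                 # max(0, x)

def squared_relu(x):
    return torch.relu(x) ** 2            # max(0, x)^2

def reglu(x, w, v):
    # ReGLU per the GLU-variants formulation: ReLU(x W) * (x V)
    return torch.relu(x @ w) * (x @ v)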

TODOs

We will release the code and data in the following order; please stay tuned!

  • Release core code of PowerInfer, supporting Llama-2, Falcon-40B.
  • Support Mistral-7B
  • Support Windows
  • Support text-generation-webui
  • Release perplexity evaluation code
  • Support Metal for Mac
  • Release code for OPT models
  • Release predictor training code
  • Support online split for FFN network
  • Support Multi-GPU

Paper and Citation

More technical details can be found in our paper.

If you find PowerInfer useful or relevant to your project and research, please cite our paper:

@misc{song2023powerinfer,
      title={PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU}, 
      author={Yixin Song and Zeyu Mi and Haotong Xie and Haibo Chen},
      year={2023},
      eprint={2312.12456},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Acknowledgement

We are thankful for the easily modifiable operator library ggml and the execution runtime provided by llama.cpp. We also extend our gratitude to THUNLP for their support of ReLU-based sparse models, and we appreciate the research of Deja Vu, which inspired PowerInfer.
