

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.

Important

bigdl-llm has now become ipex-llm (see the migration guide here); you may find the original BigDL project here.


💫 IPEX-LLM

IPEX-LLM is a PyTorch library for running LLMs on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency [1].

Note

ipex-llm Demo

See the demos below of running Text-Generation-WebUI, local RAG using LangChain-Chatchat, llama.cpp and Ollama with ipex-llm (on either an Intel Core Ultra laptop or an Intel Arc GPU).

Demo videos, recorded on an Intel Core Ultra laptop or an Intel Arc GPU:

  • Text-Generation-WebUI (webui.mp4)
  • Local RAG using LangChain-Chatchat (rag.mp4)
  • llama.cpp (llama-cpp.mp4)
  • Ollama (ollama.mp4)

Latest Update 🔥

  • [2024/04] You can now run Llama 3 on Intel GPU using llama.cpp and ollama; see the quickstart here.
  • [2024/04] ipex-llm now supports Llama 3 on both Intel GPU and CPU.
  • [2024/04] ipex-llm now provides a C++ interface, which can be used as an accelerated backend for running llama.cpp and ollama on Intel GPU.
  • [2024/03] bigdl-llm has now become ipex-llm (see the migration guide here); you may find the original BigDL project here.
  • [2024/02] ipex-llm now supports directly loading models from ModelScope (魔搭).
  • [2024/02] ipex-llm added initial INT2 support (based on the llama.cpp IQ2 mechanism), which makes it possible to run large LLMs (e.g., Mixtral-8x7B) on an Intel GPU with 16GB VRAM.
  • [2024/02] Users can now use ipex-llm through Text-Generation-WebUI GUI.
  • [2024/02] ipex-llm now supports Self-Speculative Decoding, which in practice brings ~30% speedup to FP16 inference latency on Intel GPU and BF16 inference latency on Intel CPU (see the sketch after this list).
  • [2024/02] ipex-llm now supports a comprehensive set of LLM finetuning techniques on Intel GPU (including LoRA, QLoRA, DPO, QA-LoRA and ReLoRA).
  • [2024/01] Using ipex-llm QLoRA, we managed to finetune LLaMA2-7B in 21 minutes and LLaMA2-70B in 3.14 hours on 8 Intel Max 1550 GPUs for Stanford-Alpaca (see the blog here).
More updates
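
To make the Self-Speculative Decoding update above concrete, here is a minimal, hedged sketch of how it is typically enabled from Python. The `speculative=True` flag and the FP16 low-bit loading shown here are assumptions drawn from the ipex-llm example layout, not a confirmed interface; consult the linked documentation for the exact arguments.

```python
# Hedged sketch: Self-Speculative Decoding with ipex-llm on an Intel GPU.
# Assumption: `speculative=True` enables the self-speculative path (the model
# drafts tokens with a low-bit copy of itself and verifies them in FP16).
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; any verified FP16/BF16 model

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    optimize_model=True,
    torch_dtype=torch.float16,     # FP16 on Intel GPU (use bfloat16 on CPU)
    load_in_low_bit="fp16",        # assumption: value per ipex-llm examples
    speculative=True,              # assumption: flag name per ipex-llm examples
    trust_remote_code=True,
)
model = model.to("xpu")            # move the model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("What is speculative decoding?", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```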

ipex-llm Quickstart

Install ipex-llm

  • Windows GPU: installing ipex-llm on Windows with Intel GPU
  • Linux GPU: installing ipex-llm on Linux with Intel GPU
  • Docker: using ipex-llm dockers on Intel CPU and GPU
  • For more details, please refer to the installation guide (a quick post-install sanity check is sketched below)
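
Once one of the installs above has completed, a short check can confirm that PyTorch sees the Intel GPU. This is a minimal sketch and assumes the GPU install, which ships intel_extension_for_pytorch and registers the `xpu` device:

```python
# Minimal post-install sanity check (assumes the ipex-llm GPU install, which
# bundles intel_extension_for_pytorch and registers the 'xpu' device in torch).
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401

print("XPU available:", torch.xpu.is_available())
if torch.xpu.is_available():
    print("Device:", torch.xpu.get_device_name(0))
```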

Run ipex-llm

  • llama.cpp: running llama.cpp (using the C++ interface of ipex-llm as an accelerated backend for llama.cpp) on Intel GPU
  • ollama: running ollama (using the C++ interface of ipex-llm as an accelerated backend for ollama) on Intel GPU
  • vLLM: running ipex-llm in vLLM on both Intel GPU and CPU
  • FastChat: running ipex-llm in FastChat serving on both Intel GPU and CPU
  • LangChain-Chatchat RAG: running ipex-llm in LangChain-Chatchat (Knowledge Base QA using RAG pipeline)
  • Text-Generation-WebUI: running ipex-llm in oobabooga WebUI
  • Benchmarking: running (latency and throughput) benchmarks for ipex-llm on Intel CPU and GPU

Code Examples

For more details, please refer to the ipex-llm document website.
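
As a quick flavor of the Python API, the sketch below loads a Hugging Face model with ipex-llm's low-bit (INT4) optimization and runs generation on an Intel GPU. It follows the pattern used throughout the ipex-llm examples; treat the exact keyword arguments (e.g., `load_in_4bit`) as assumptions to verify against the document website.

```python
# Minimal sketch: INT4 inference with ipex-llm on an Intel GPU.
# Keyword arguments follow the pattern of the ipex-llm examples; verify them
# against the documentation before relying on this snippet.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement

model_path = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; see the verified list

# load_in_4bit=True applies ipex-llm's INT4 optimization at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_4bit=True, trust_remote_code=True
)
model = model.to("xpu")  # drop this line to run on Intel CPU instead

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("Once upon a time,", return_tensors="pt").to("xpu")

with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same pattern applies on CPU (simply keep the model on the CPU) and, per the verified-models table below, to the other supported architectures.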

Verified Models

Over 50 models have been optimized/verified on ipex-llm, including LLaMA/LLaMA2, Mistral, Mixtral, Gemma, LLaVA, Whisper, ChatGLM2/ChatGLM3, Baichuan/Baichuan2, Qwen/Qwen-1.5, InternLM and more; see the list below.

| Model | CPU Example | GPU Example |
|-------|-------------|-------------|
| LLaMA (such as Vicuna, Guanaco, Koala, Baize, WizardLM, etc.) | link1, link2 | link |
| LLaMA 2 | link1, link2 | link |
| LLaMA 3 | link | link |
| ChatGLM | link | |
| ChatGLM2 | link | link |
| ChatGLM3 | link | link |
| Mistral | link | link |
| Mixtral | link | link |
| Falcon | link | link |
| MPT | link | link |
| Dolly-v1 | link | link |
| Dolly-v2 | link | link |
| Replit Code | link | link |
| RedPajama | link1, link2 | |
| Phoenix | link1, link2 | |
| StarCoder | link1, link2 | link |
| Baichuan | link | link |
| Baichuan2 | link | link |
| InternLM | link | link |
| Qwen | link | link |
| Qwen1.5 | link | link |
| Qwen-VL | link | link |
| Aquila | link | link |
| Aquila2 | link | link |
| MOSS | link | |
| Whisper | link | link |
| Phi-1_5 | link | link |
| Flan-t5 | link | link |
| LLaVA | link | link |
| CodeLlama | link | link |
| Skywork | link | |
| InternLM-XComposer | link | |
| WizardCoder-Python | link | |
| CodeShell | link | |
| Fuyu | link | |
| Distil-Whisper | link | link |
| Yi | link | link |
| BlueLM | link | link |
| Mamba | link | link |
| SOLAR | link | link |
| Phixtral | link | link |
| InternLM2 | link | link |
| RWKV4 | | link |
| RWKV5 | | link |
| Bark | link | link |
| SpeechT5 | | link |
| DeepSeek-MoE | link | |
| Ziya-Coding-34B-v1.0 | link | |
| Phi-2 | link | link |
| Yuan2 | link | link |
| Gemma | link | link |
| DeciLM-7B | link | link |
| Deepseek | link | link |
| StableLM | link | link |

Get Support

Footnotes

  1. Performance varies by use, configuration and other factors. ipex-llm may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.