OpenChat: Advancing Open-source Language Models with Imperfect Data

Online Demo • Discord • Huggingface

OpenChat is a collection of open-source language models, optimized and fine-tuned with a strategy inspired by offline reinforcement learning. We use approximately 80k ShareGPT conversations, a conditioning strategy, and weighted loss to deliver outstanding performance, despite our simple approach. Our ultimate goal is to develop a high-performance, commercially available, open-source large language model, and we are continuously making strides towards this vision.

🤖 Ranked #1 among all open-source models on AgentBench

🔥 Ranked #1 among 13B open-source models | 89.5% win-rate on AlpacaEval | 7.19 score on MT-bench

🕒 Exceptionally efficient padding-free fine-tuning, requiring only 15 hours on 8xA100 80G

💲 FREE for commercial use under the Llama 2 Community License

Models

Our latest model, OpenChat 3.2 SUPER, is an enhanced version of the original OpenChat 3.2. We recommend using it for optimal conversational and instruction-following performance. Older versions are supported for a limited time for research purposes. All models are designed for English and have limited multilingual capabilities. They can be downloaded under the Llama 2 Community License.

To use these models, we highly recommend installing the OpenChat package by following the installation guide, then running the OpenChat OpenAI-compatible API server with the serving command from the table below. The server is optimized for high-throughput deployment with vLLM and can run on a single GPU with at least 48GB of memory, or on two consumer GPUs with tensor parallelism. To enable tensor parallelism, append --tensor-parallel-size 2 to the serving command.

Once started, the server listens at localhost:18888 and is compatible with the OpenAI ChatCompletion API specification. See the example request below for reference. Additionally, you can use the OpenChat Web UI for a user-friendly experience.

If you want to deploy the server as an online service, you can use --api-keys sk-KEY1 sk-KEY2 ... to specify allowed API keys and --disable-log-requests --disable-log-stats --log-file openchat.log for logging only to a file. For security purposes, we recommend using an HTTPS gateway in front of the server.
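
For example, a full online-serving command combining these options might look like the following (the keys are placeholders; model and flags as in the serving table below):

python -m ochat.serving.openai_api_server --model-type openchat_v3.2 \
    --model openchat/openchat_v3.2_super \
    --api-keys sk-KEY1 sk-KEY2 \
    --disable-log-requests --disable-log-stats --log-file openchat.log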

Example request (click to expand)
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openchat_v3.2",
    "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
  }'
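
The same request can be sent from Python. Below is a minimal sketch using the requests library; the response shape follows the OpenAI ChatCompletion specification mentioned above:

# Python equivalent of the curl example above
import requests

response = requests.post(
    "http://localhost:18888/v1/chat/completions",
    json={
        "model": "openchat_v3.2",
        "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}],
    },
)
# Print the assistant reply from the first choice
print(response.json()["choices"][0]["message"]["content"])
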
| Model | Size | Context | Weights | Serving |
|---|---|---|---|---|
| OpenChat 3.2 SUPER | 13B | 4096 | Huggingface | python -m ochat.serving.openai_api_server --model-type openchat_v3.2 --model openchat/openchat_v3.2_super --engine-use-ray --worker-use-ray --max-num-batched-tokens 5120 |
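
To serve on two smaller GPUs instead, append the tensor parallelism flag described above to the same command:

python -m ochat.serving.openai_api_server --model-type openchat_v3.2 \
    --model openchat/openchat_v3.2_super \
    --engine-use-ray --worker-use-ray --max-num-batched-tokens 5120 \
    --tensor-parallel-size 2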

For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below:

Conversation templates (click to expand)
# Single-turn V3.2 (SUPER)
tokenize("GPT4 User: Hello<|end_of_turn|>GPT4 Assistant:")
# Result: [1, 402, 7982, 29946, 4911, 29901, 15043, 32000, 402, 7982, 29946, 4007, 22137, 29901]

# Multi-turn V3.2 (SUPER)
tokenize("GPT4 User: Hello<|end_of_turn|>GPT4 Assistant: Hi<|end_of_turn|>GPT4 User: How are you today?<|end_of_turn|>GPT4 Assistant:")
# Result: [1, 402, 7982, 29946, 4911, 29901, 15043, 32000, 402, 7982, 29946, 4007, 22137, 29901, 6324, 32000, 402, 7982, 29946, 4911, 29901, 1128, 526, 366, 9826, 29973, 32000, 402, 7982, 29946, 4007, 22137, 29901]
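
For reference, here is a minimal sketch of such Transformers inference under the single-turn template above. The generation settings are assumptions; the vLLM-based server remains the recommended path:

# Slow-path inference with Huggingface Transformers using the V3.2 template
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_v3.2_super")
model = AutoModelForCausalLM.from_pretrained("openchat/openchat_v3.2_super", device_map="auto")

# Build a single-turn prompt and stop generation at the <|end_of_turn|> token
prompt = "GPT4 User: Hello<|end_of_turn|>GPT4 Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|end_of_turn|>"),
)
# Decode only the newly generated assistant tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))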

Benchmarks

We have evaluated our models using the two most popular evaluation benchmarks**: AlpacaEval and MT-bench. Here we list the top models together with our released versions, sorted by model size in descending order. The full lists can be found on the MT-bench and AlpacaEval leaderboards.

To ensure consistency, we used the same routine as ChatGPT / GPT-4 to run these benchmarks. We started the OpenAI API-compatible server and set the openai.api_base to http://localhost:18888/v1 in the benchmark program.
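
For reference, a minimal sketch of this setup using the pre-1.0 openai Python package (the dummy key is a placeholder; any benchmark program built on this client works the same way):

# Point an OpenAI-client-based benchmark at the local OpenChat server
import openai

openai.api_base = "http://localhost:18888/v1"
openai.api_key = "openchat-dummy-key"  # any non-empty string when --api-keys is not set

completion = openai.ChatCompletion.create(
    model="openchat_v3.2",
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion["choices"][0]["message"]["content"])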

| Model | Size | Context | Dataset Size | 💲Free | AlpacaEval (win rate % vs. text-davinci-003) | MT-bench (win rate adjusted % vs. ChatGPT) | MT-bench (score) |
|---|---|---|---|---|---|---|---|
| GPT-4 | 1.8T* | 8K | | ❌ | 95.3 | 82.5 | 8.99 |
| ChatGPT | 175B* | 4K | | ❌ | 89.4 | 50.0 | 7.94 |
| Llama-2-70B-Chat | 70B | 4K | 2.9M | ✅ | 92.7 | 60.0 | 6.86 |
| OpenChat 3.2 SUPER | 13B | 4K | 80K | ✅ | 89.5 | 57.5 | 7.19 |
| Llama-2-13B-Chat | 13B | 4K | 2.9M | ✅ | 81.1 | 55.3 | 6.65 |
| WizardLM 1.2 | 13B | 4K | 196K | ✅ | 89.2 | 53.1 | 7.05 |
| Vicuna 1.5 | 13B | 2K | 125K | ✅ | 78.8 | 37.2 | 6.57 |

*: Estimated model size

**: The benchmark metrics represent a quantified measure of a subset of the model's capabilities. A win-rate greater than 50% does not necessarily indicate that the model is better than ChatGPT in all scenarios or for all use cases. It is essential to consider the specific tasks or applications for which the model was evaluated and compare the results accordingly.

vLLM Eval

🚀 To ensure a comprehensive evaluation of large language models (LLMs), we are developing vLLM Eval, a suite of accelerated standard benchmarks including AGIEval, BBH, and Chain-of-Thought Hub. The suite leverages the speedup provided by vLLM and lets us finish the entire benchmark in just 5 minutes.

We will release the evaluation results as soon as they become available, so stay tuned!

Installation

To use OpenChat, first install CUDA and PyTorch; then install OpenChat via pip:

pip3 install ochat

If you want to train models, also install FlashAttention 1:

pip3 install packaging ninja
pip3 install --no-build-isolation "flash-attn<2"

FlashAttention and vLLM may have compatibility issues. If you encounter such problems, try creating a fresh conda environment by following the instructions below.

conda create -y --name openchat
conda activate openchat

conda install -y python
conda install -y cudatoolkit-dev -c conda-forge
pip3 install torch torchvision torchaudio

pip3 install packaging ninja
pip3 install --no-build-isolation "flash-attn<2"

pip3 install ochat
In addition to PyPI, you can also install from source (click to expand)
git clone https://github.com/imoneoi/openchat
cd openchat

pip3 install --upgrade pip  # enable PEP 660 support
pip3 install -e .

Web UI

After launching the API server, you can interact with it using OpenChat-UI, a fork of Chatbot UI with support for OpenChat models.

To use OpenChat-UI, follow these steps:

  1. Clone the OpenChat-UI repo:
git clone https://github.com/imoneoi/openchat-ui.git
  2. Install dependencies:
npm i
  3. Set the API host to the local server (or the address of the OpenChat server).

Create a .env.local file in the root of the OpenChat-UI repo with the following content:

OPENAI_API_HOST=http://localhost:18888
OPENAI_API_KEY=openchat-dummy-key
NEXT_PUBLIC_DEFAULT_TEMPERATURE=0.7
  4. Run the app:
npm run dev

Training

OpenChat leverages padding-free training and the Multipack Sampler, achieving a 3~6x speedup compared to commonly used padded training. The V3 series can be trained in 15 hours on 8x A100 80GB.

The hyperparameters used in training the models are listed as follows:

| Hyperparameter | Context | Batch size | Learning rate | AdamW betas | AdamW eps | Weight decay |
|---|---|---|---|---|---|---|
| Value | 4096 | 64 | Auto | (0.9, 0.95) | 1e-5 | 0.1 |

To train using 8x A100 80GB GPUs, first clone the training dataset:

git lfs install
git clone https://huggingface.co/datasets/openchat/openchat_sharegpt_v3

Then, run the following commands for V3.2 SUPER:

Training commands (click to expand)
NUM_GPUS=8

deepspeed --num_gpus=$NUM_GPUS --module ochat.training_deepspeed.train \
    --model_type openchat_v3.2 \
    --model_path imone/LLaMA2_13B_with_EOT_token \
    --data_path openchat_sharegpt_v3/openchat_v3.2_super \
    --save_path PATH_TO_SAVE_MODEL \
    --epochs 5 \
    --batch_size_per_gpu 8 \
    --deepspeed \
    --deepspeed_config ochat/training_deepspeed/deepspeed_config.json

Please note that we added an EOT (end-of-turn) token to the Llama 2 base models. The embedding of the EOT token is initialized as the average of all existing token embeddings. The HF repo imone/LLaMA2_13B_with_EOT_token contains converted Llama weights with the aforementioned EOT token.
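
For illustration, a minimal sketch of this initialization (the base model path is an assumption; the released repo above already contains the converted weights, so this step is not needed in practice):

# Add an EOT token and initialize its embedding as the average of all
# existing token embeddings, as described above
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-13b-hf"  # assumed base weights
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

tokenizer.add_special_tokens({"additional_special_tokens": ["<|end_of_turn|>"]})
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    # Set the new final row of both embedding matrices to the mean of the existing rows
    for emb in (model.get_input_embeddings().weight, model.get_output_embeddings().weight):
        emb[-1] = emb[:-1].mean(dim=0)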

Limitations

Foundation Model Limitations

Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

  • Complex reasoning
  • Mathematical and arithmetic tasks
  • Programming and coding challenges

Hallucination of Non-existent Information

OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.

License

Our OpenChat V3 models are licensed under the Llama 2 Community License. The code is distributed under the Apache License 2.0.

Contact

💌 We are a student team from Tsinghua University working on OpenChat, a project that requires additional computing power or LLM API keys for further development. If you are interested in our project and would like to offer support, please feel free to reach out to us:

  • Wang Guan (Project Leader) [imonenext at gmail dot com]
  • Cheng Sijie [LeslieCheng0701 at outlook dot com]

We look forward to hearing from you and collaborating on this exciting project!

TODO

High-priority

  • Improving reasoning and math skills
  • Training larger LLaMA models

Low-priority

  • Mixing SFT data with pretraining data (e.g. RedPajama)
  • Extending context by interpolating RoPE (requires mixing with pretraining data)
  • Improving conversation splitting

Citation

@software{openchat,
  title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
  author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
  doi = {10.5281/zenodo.8105775},
  url = {https://github.com/imoneoi/openchat},
  version = {pre-release},
  year = {2023},
  month = {7},
}

Legacy Models

The following models are older versions of OpenChat with inferior performance compared to the latest version; they will be deprecated in the next release. Please note that the OpenChat V1 and V2 series are now deprecated; to use V1 or V2 models, install ochat version 3.1.x.

To run the models on multiple GPUs with smaller VRAM, you can enable tensor parallelism, for example by using the --tensor-parallel-size 2 flag.

OpenChat V3 (click to expand)
| Model | Size | Context | Weights | Serving |
|---|---|---|---|---|
| OpenChat 3.2 | 13B | 4096 | Huggingface | python -m ochat.serving.openai_api_server --model-type openchat_v3.2 --model openchat/openchat_v3.2 --engine-use-ray --worker-use-ray --max-num-batched-tokens 5120 |
| OpenChat 3.1 | 13B | 4096 | Huggingface | python -m ochat.serving.openai_api_server --model-type openchat_v3.1_llama2 --model openchat/openchat_v3.1 --engine-use-ray --worker-use-ray --max-num-batched-tokens 5120 |

Acknowledgements

We would like to express our gratitude to GPT Desk Pte. Ltd., 01.AI company, and Tsinghua Laboratory of Brain and Intelligence (THBI) for their invaluable support.

We are also grateful to the developers of the following projects, which have contributed significantly to our research: Llama 2, self-instruct, FastChat (Vicuna), Alpaca and StarCoder.