# YaRN
This repo contains the code and data for the YaRN context window extension method.
## Preprint
Preprint v2 (arXiv): [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/abs/2309.00071)
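For orientation, the snippet below sketches the core idea from the paper: RoPE frequencies are interpolated per dimension ("NTK-by-parts"), leaving fast-rotating high-frequency dimensions untouched and dividing slow-rotating low-frequency dimensions by the scale factor, combined with an attention temperature term. This is a simplified standalone sketch based on the formulas in the paper, not the repository's implementation; the ramp bounds α = 1 and β = 32 follow the defaults suggested in the paper.

```python
import math
import torch

def yarn_rope_parameters(
    head_dim: int = 128,           # per-head dimension
    base: float = 10000.0,         # RoPE base
    scale: float = 16.0,           # context extension factor s (e.g. 4096 -> 65536)
    original_max_pos: int = 4096,  # pre-trained context window L
    alpha: float = 1.0,            # ramp bounds suggested in the paper
    beta: float = 32.0,
):
    # Standard RoPE inverse frequencies: theta_d = base^(-2d / head_dim).
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

    # Number of full rotations each dimension completes over the original context.
    rotations = original_max_pos * inv_freq / (2 * math.pi)

    # Ramp: 0 for slow-rotating (low-frequency) dims -> interpolate fully,
    # 1 for fast-rotating (high-frequency) dims -> leave unchanged.
    ramp = torch.clamp((rotations - alpha) / (beta - alpha), 0.0, 1.0)

    # Blend position interpolation (theta / s) with the original frequencies.
    yarn_inv_freq = (inv_freq / scale) * (1.0 - ramp) + inv_freq * ramp

    # Attention temperature: the paper recommends scaling attention logits by 1/t
    # with sqrt(1/t) = 0.1 * ln(s) + 1; sqrt(1/t) is typically folded into the
    # cos/sin embedding cache.
    mscale = 0.1 * math.log(scale) + 1.0
    return yarn_inv_freq, mscale
```

The scaled inverse frequencies are then used in place of the standard RoPE frequencies when building the rotary embedding cache.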
## Models
### LLaMA
We publish variants of Llama 2 fine-tuned with YaRN at 32K, 64K, and 128K context window lengths. They are available under the Llama 2 license on 🤗 Hugging Face.
| Size | Context | Link |
|---|---|---|
| 7B | 64K | NousResearch/Yarn-Llama-2-7b-64k |
| 7B | 128K | NousResearch/Yarn-Llama-2-7b-128k |
| 13B | 64K | NousResearch/Yarn-Llama-2-13b-64k |
| 13B | 128K | NousResearch/Yarn-Llama-2-13b-128k |
| 70B | 32K | NousResearch/Yarn-Llama-2-70b-32k |
We also publish 8K context window versions of Llama 2 7B fine-tuned with the NTK-aware and YaRN methods (Table 1 in the conference paper).
### Mistral
With the release of v2 of our paper, we are also publishing 64K and 128K context window variants of Mistral 7B v0.1.
| Size | Context | Link |
|---|---|---|
| 7B | 64K | NousResearch/Yarn-Mistral-7b-64k |
| 7B | 128K | NousResearch/Yarn-Mistral-7b-128k |
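As a usage sketch, any of the checkpoints above can be loaded with 🤗 Transformers; `trust_remote_code=True`, the bfloat16 dtype, and `device_map="auto"` below are assumptions to check against the individual model cards.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any of the published checkpoints can be substituted here.
model_id = "NousResearch/Yarn-Mistral-7b-128k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True assumes the checkpoint ships custom modeling code;
# see the model card if this is not required.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("YaRN extends the context window by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```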
## Reproduction
We strongly believe in open science and therefore publish all code and data needed to reproduce the results in our paper. To reproduce, clone the repository and perform a local installation:
```sh
git clone https://github.com/jquesnelle/yarn
cd yarn
pip install -e .
```
## Training
To train the models, run `accelerate config` and enable DeepSpeed acceleration. `deepspeed/zero3.json` was the configuration file used for training:
```sh
./train.sh
```
The tokenized training data is available on 🤗 Hugging Face and was derived from the pg19 dataset. For the Mistral models, a mix of the pretrain and fine-tune splits of Long-Data-Collections was used; the tokenized dataset is also available on 🤗 Hugging Face.
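As a quick sanity check, the tokenized data can be streamed with 🤗 Datasets; the dataset id and the `input_ids` column name below are placeholders to replace with the actual datasets linked above.

```python
from itertools import islice

from datasets import load_dataset

# Placeholder repository id -- substitute the tokenized pg19-derived
# (or Long-Data-Collections) dataset linked above.
ds = load_dataset("ORG/tokenized-long-context-data", split="train", streaming=True)

# Peek at a couple of samples to confirm the pre-tokenized sequence length;
# the "input_ids" column name is an assumption.
for sample in islice(ds, 2):
    print(len(sample["input_ids"]))
```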
## Evaluation
To reproduce the evaluations, install lm-evaluation-harness with `pip install git+https://github.com/EleutherAI/lm-evaluation-harness` and then run the two provided scripts:
```sh
./eval.sh
./eval-harness.sh
```
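For an ad-hoc check outside those scripts, the harness can also be called from Python; the `hf` model type, task names, and model arguments below are illustrative and depend on the installed harness version, so treat `eval.sh` and `eval-harness.sh` as the source of truth.

```python
from lm_eval import evaluator

# Illustrative only: tasks and model_args are not the paper's exact setup.
results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=NousResearch/Yarn-Llama-2-7b-64k,trust_remote_code=True,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag"],
    batch_size=1,
)
print(results["results"])
```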
## Citation
```
@misc{peng2023yarn,
      title={YaRN: Efficient Context Window Extension of Large Language Models},
      author={Bowen Peng and Jeffrey Quesnelle and Honglu Fan and Enrico Shippole},
      year={2023},
      eprint={2309.00071},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```