

Russian GPT-3 models

ruGPT3XL, ruGPT3Large, ruGPT3Medium, ruGPT3Small and ruGPT2Large

This repository contains a collection of autoregressive transformer language models trained on a large dataset of the Russian language.

  • Russian GPT-3 models (ruGPT3XL, ruGPT3Large, ruGPT3Medium, ruGPT3Small) trained with a 2048 sequence length using sparse and dense attention blocks. We also provide a Russian GPT-2 large model (ruGPT2Large) trained with a 1024 sequence length.

  • Try model generation in Colab! Notebooks are available for ruGPT-3 XL and for the smaller ruGPT-3 models.

  • Usage examples are described in detail here. See how fine-tuning works in Colab.


ruGPT3XL

Setup

For Colab we recommend using the following installation instructions:

export LD_LIBRARY_PATH=/usr/lib/
apt-get install clang-9 llvm-9 llvm-9-dev llvm-9-tools
git clone https://github.com/qywu/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
pip install triton
DS_BUILD_CPU_ADAM=1 DS_BUILD_SPARSE_ATTN=1 pip install deepspeed
pip install transformers
pip install huggingface_hub
pip install timm==0.3.2
git clone https://github.com/sberbank-ai/ru-gpts
cp ru-gpts/src_utils/trainer_pt_utils.py /usr/local/lib/python3.8/dist-packages/transformers/trainer_pt_utils.py
cp ru-gpts/src_utils/_amp_state.py /usr/local/lib/python3.8/dist-packages/apex/amp/_amp_state.py

After installing the environment, please restart Colab. To check that everything is OK, run the following commands:

!ds_report
# Output:
...
sparse_attn ............ [YES] ...... [OKAY]
...
import deepspeed.ops.sparse_attention.sparse_attn_op

Usage

Here is a simple usage example. For more, see this example or Open In Colab.

import os
import sys

# Make the cloned repo importable when running from the content root.
sys.path.append("ru-gpts/")
os.environ["USE_DEEPSPEED"] = "1"
# The master address and port can be changed if needed.
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "5000"

from src.xl_wrapper import RuGPT3XL
gpt = RuGPT3XL.from_pretrained("sberbank-ai/rugpt3xl", seq_len=512)
gpt.generate(
    "Кто был президентом США в 2020? ",
    max_length=50,
    no_repeat_ngram_size=3,
    repetition_penalty=2.,
)
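The `no_repeat_ngram_size=3` argument above prevents any 3-gram that has already been generated from appearing again. A minimal pure-Python sketch of that check (the function name and toy token IDs are illustrative, not part of the library):

```python
def violates_no_repeat_ngram(tokens, candidate, n=3):
    """Return True if appending `candidate` would repeat an n-gram already in `tokens`."""
    if len(tokens) < n - 1:
        return False
    # The n-gram that appending `candidate` would create.
    new_ngram = tuple(tokens[-(n - 1):]) + (candidate,)
    # All n-grams already present in the sequence.
    existing = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return new_ngram in existing

seq = [5, 7, 9, 5, 7]
print(violates_no_repeat_ngram(seq, 9))  # True: (5, 7, 9) already occurred
print(violates_no_repeat_ngram(seq, 8))  # False: (5, 7, 8) is new
```

During beam search or sampling, tokens that would violate this check simply have their logits set to -inf before the next token is chosen.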

Finetuning

An example of fine-tuning, loading the fine-tuned model, and generating text is here.

Our fine-tuning script is here.

Pretraining details ruGPT3XL

The model was trained with a 512 sequence length using DeepSpeed and Megatron code by the Devices team, on an 80B-token dataset for 4 epochs. After that, the model was fine-tuned for 1 epoch with a 2048 sequence length.
Note! The model has sparse attention blocks.

Total training time was around 10 days on 256 GPUs.
Final perplexity on the test set is 12.05.
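Perplexity is the exponential of the mean per-token cross-entropy loss, so the reported 12.05 corresponds to a mean loss of about 2.49 nats. A minimal illustration (the helper name is ours, not part of the codebase):

```python
import math

def perplexity(mean_nll: float) -> float:
    """Perplexity is exp of the mean per-token negative log-likelihood (in nats)."""
    return math.exp(mean_nll)

print(round(perplexity(2.489), 2))  # 12.05, matching the reported test-set perplexity
```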

🤗HuggingFace model card link.

ruGPT3Large, ruGPT3Medium, ruGPT3Small, ruGPT2Large

Setup

To use ruGPT3Large, ruGPT3Medium, ruGPT3Small, or ruGPT2Large, just install 🤗HuggingFace transformers.

pip install transformers==4.24.0

Usage

Here you can find examples of fine-tuning and generation.

These examples are also adapted for Google Colab:

  • finetuning: finetuning.
  • generation: generation
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name_or_path = "sberbank-ai/rugpt3large_based_on_gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name_or_path)
model = GPT2LMHeadModel.from_pretrained(model_name_or_path).cuda()

text = "Александр Сергеевич Пушкин родился в "
# Encode the prompt and move it to the GPU once.
input_ids = tokenizer.encode(text, return_tensors="pt").cuda()
out = model.generate(input_ids)
generated_text = tokenizer.decode(out[0])
print(generated_text)
# Output should be like this:
# Александр Сергеевич Пушкин родился в \n1799 году. Его отец был крепостным крестьянином, а мать – крепостной крестьянкой. Детство и юность Пушкина прошли в деревне Михайловское под Петербургом. В 1820-х годах семья переехала
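Greedy decoding as in the example above tends to produce repetitive text; sampling with `do_sample=True`, `top_k`, and `top_p` in `model.generate` is a common alternative. A minimal sketch of the nucleus (top-p) filtering idea on a toy distribution (the helper is illustrative, not the transformers implementation):

```python
def top_p_filter(probs, p=0.9):
    """Return indices of the smallest set of tokens whose cumulative probability >= p."""
    # Consider tokens from most to least probable.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    return sorted(kept)

# Tokens 0-2 cover 95% of the mass, so the tail token 3 is filtered out.
print(top_p_filter([0.5, 0.3, 0.15, 0.05], p=0.9))  # [0, 1, 2]
```

The next token is then sampled only from the kept set after renormalizing their probabilities.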

Pretraining details

All pretraining was done on Nvidia Tesla V100-SXM3 32 GB GPUs on the Christofari cluster. The details of pretraining for each model follow.

Pretraining details ruGPT3Large

The model was trained with a 1024 sequence length using the transformers library by the Devices team on 80B tokens for 3 epochs. After that, the model was fine-tuned for 1 epoch with a 2048 sequence length.

Total training time was around 14 days on 128 GPUs for the 1024 context and a few days on 16 GPUs for the 2048 context.
Final perplexity on the test set is 13.6.

You can obtain this model by using transformers with model name sberbank-ai/rugpt3large_based_on_gpt2.

🤗HuggingFace model card link

Our pretraining script is here.

Pretraining details ruGPT3Medium

The model was trained with a 1024 sequence length using the transformers library by the Devices team on 80B tokens for 3 epochs. After that, the model was fine-tuned on a 2048 context.

Total training time was around 16 days on 64 GPUs.
Final perplexity on the test set is 17.4.

You can obtain this model by using transformers with model name sberbank-ai/rugpt3medium_based_on_gpt2.

🤗HuggingFace model card link

Our pretraining script is here.

Pretraining details ruGPT3Small

The model was trained with a 1024 sequence length using transformers by the Devices team on 80B tokens for around 3 epochs. After that, the model was fine-tuned on a 2048 context.

Total training time took around one week on 32 GPUs.

You can obtain this model by using transformers with model name sberbank-ai/rugpt3small_based_on_gpt2.

🤗HuggingFace model card link

Our pretraining script is here.

Pretraining details ruGPT2Large

The model was trained with a 1024 sequence length using transformers by the Devices team on 170 GB of data, on 64 GPUs for 3 weeks.

You can obtain this model by using transformers with model name sberbank-ai/rugpt2large.

🤗HuggingFace model card link

OpenSource Solutions with ruGPT3

Papers mentioning ruGPT3

According to a Google Scholar search. Feel free to add links to this list.

Text Simplification

@article{shatilovsentence,
  title={Sentence simplification with ruGPT3},
  author={Shatilov, AA and Rey, AI},
  url={http://www.dialog-21.ru/media/5281/shatilovaaplusreyai142.pdf}
}

@article{fenogenovatext,
  title={Text Simplification with Autoregressive Models},
  author={Fenogenova, Alena},
  url={http://www.dialog-21.ru/media/5250/fenogenovaa141.pdf}
}

Text Detoxification

@article{dementieva2021methods,
  title={Methods for Detoxification of Texts for the Russian Language},
  author={Dementieva, Daryna and Moskovskiy, Daniil and Logacheva, Varvara and Dale, David and Kozlova, Olga and Semenov, Nikita and Panchenko, Alexander},
  journal={arXiv preprint arXiv:2105.09052},
  year={2021},
  url={https://arxiv.org/abs/2105.09052}
}

Paraphrasing and Data Augmentation

@inproceedings{fenogenova2021russian,
  title={Russian Paraphrasers: Paraphrase with Transformers},
  author={Fenogenova, Alena},
  booktitle={Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing},
  pages={11--19},
  year={2021},
  url={https://www.aclweb.org/anthology/2021.bsnlp-1.2.pdf}
}

Model Evaluation

@article{malykh2021morocco,
  title={MOROCCO: Model Resource Comparison Framework},
  author={Malykh, Valentin and Kukushkin, Alexander and Artemova, Ekaterina and Mikhailov, Vladislav and Tikhonova, Maria and Shavrina, Tatiana},
  journal={arXiv preprint arXiv:2104.14314},
  year={2021},
  url={https://arxiv.org/abs/2104.14314}
}
