• Stars: 471
• Rank: 93,216 (top 2%)
• License: Apache License 2.0
• Created: over 2 years ago
• Updated: over 1 year ago

Repository Details

Polyglot: Large Language Models of Well-balanced Competence in Multi-languages

1. Introduction

Why another multilingual model?

Various multilingual models such as mBERT, BLOOM, and XGLM have already been released, so one might ask, "Why do we need to make multilingual models again?" Before answering, we would ask in turn, "Why do people around the world keep building monolingual models in their own languages even though many multilingual models already exist?" One of the most significant reasons is dissatisfaction with the non-English performance of current multilingual models. We therefore set out to build multilingual models with stronger non-English performance. This is why we need to make multilingual models again, and why we name them 'Polyglot'.

2. Projects

1) Polyglot-Ko [DONE]

When we started our research, we already had 1.2TB of Korean data collected by TUNiB. Before collecting a large amount of multilingual data, we decided to try Korean modeling with the dataset we already had. This Korean model can be used for performance comparisons with the multilingual models, and the model itself is useful to many Korean companies and researchers.

Size   Training Status  Model Card  Model Checkpoints  Demo
1.3B   Finished         Available   Available          Available
3.8B   Finished         Available   Available          N/A
5.8B   Finished         Available   Available          N/A
12.8B  Finished         Available   Available          N/A
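The released checkpoints above can be addressed on the Hugging Face Hub, whose repository ids follow the EleutherAI/polyglot-ko-<size> pattern. A minimal sketch (the loading lines are commented out so nothing is downloaded; the dict and variable names are illustrative):

```python
# Map each size in the table above to its Hugging Face Hub id.
# Hub ids follow the EleutherAI/polyglot-ko-<size> naming pattern.
SIZES = ("1.3b", "3.8b", "5.8b", "12.8b")
CHECKPOINTS = {size: f"EleutherAI/polyglot-ko-{size}" for size in SIZES}

# Loading a checkpoint with the transformers library would look like:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(CHECKPOINTS["1.3b"])
# model = AutoModelForCausalLM.from_pretrained(CHECKPOINTS["1.3b"])
```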

💡 We are collaborating with the KoAlpaca team, which is creating a series of Korean instruction-fine-tuned models. As a result, we were able to release the KoAlpaca-Polyglot models. Please refer to here for more details.

3. Limitations and Biases

Polyglot has been trained to optimize next-token prediction. Language models like this are often used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot will not always return the most factual or accurate response, but rather the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend a human curator or another filtering mechanism to censor sensitive content.
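The filtering step recommended above could be sketched as a simple post-generation check. This is only a minimal illustration, not part of Polyglot: the blocklist terms and the function name are hypothetical placeholders, and a production system would use a real moderation list or classifier.

```python
from typing import Optional

# Illustrative blocklist; replace with a real moderation word list or
# a trained content classifier in practice.
BLOCKLIST = {"badword1", "badword2"}

def filter_generation(text: str) -> Optional[str]:
    """Return the text if it passes the filter, else None for human review."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return None  # escalate to a human curator instead of showing the output
    return text
```

Routing every sampled continuation through a check like this (or a human reviewer) before display is one way to act on the caveat that outputs are statistically likely rather than guaranteed safe or factual.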

4. Citation and Related Information

BibTeX entry

If you find our work useful, please consider citing:

@misc{ko2023technical,
      title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models}, 
      author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and Jiwung Hyun and Sungho Park},
      year={2023},
      eprint={2306.02254},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Licensing

All our models are licensed under the terms of the Apache License 2.0.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

However, as noted above, the model has the potential to generate unpredictable text. We are therefore not responsible for any damages resulting from use of the model.

Acknowledgement

This project was made possible thanks to computing resources from Stability.ai and to TUNiB, which provided a large-scale Korean dataset.
