  • Stars: 7,646
  • Rank: 4,941 (Top 0.1%)
  • Language: Python
  • License: Apache License 2.0
  • Created: about 2 years ago
  • Updated: about 1 year ago


Repository Details

🌐 Blog · Download Model · 🪧 Demo · ✉️ Email · 📃 Paper [ICLR 2023]

💬 Google Group (Updates), WeChat Group, or Slack channel (Discussions)

GLM-130B: An Open Bilingual Pre-Trained Model

GLM-130B is an open bilingual (English & Chinese) bidirectional dense model with 130 billion parameters, pre-trained using the General Language Model (GLM) algorithm. It is designed to support inference with its 130B parameters on a single A100 (40G * 8) or V100 (32G * 8) server. With INT4 quantization, the hardware requirements can be further reduced to a single server with 4 * RTX 3090 (24G), with almost no performance degradation. As of July 3rd, 2022, GLM-130B has been trained on over 400 billion text tokens (200B each for Chinese and English), and it has the following unique features:

  • Bilingual: supports both English and Chinese.
  • Performance (EN): better than GPT-3 175B (+4.0%), OPT-175B (+5.5%), and BLOOM-176B (+13.0%) on LAMBADA and slightly better than GPT-3 175B (+0.9%) on MMLU.
  • Performance (CN): significantly better than ERNIE TITAN 3.0 260B on 7 zero-shot CLUE datasets (+24.26%) and 5 zero-shot FewCLUE datasets (+12.75%).
  • Fast Inference: supports fast inference on both SAT and FasterTransformer (up to 2.5X faster) with a single A100 server.
  • Reproducibility: all results (30+ tasks) can be easily reproduced with open-sourced code and model checkpoints.
  • Cross-Platform: supports training and inference on NVIDIA, Hygon DCU, Ascend 910, and Sunway (to be released soon).

This repository mainly focuses on the evaluation of GLM-130B. If you find our work and our open-sourced efforts useful, please give the repository a ⭐️ to encourage our future development! :)

News

  • [2023.03.14] We are happy to introduce ChatGLM, a bilingual dialogue language model based on GLM-130B, and its open-sourced version ChatGLM-6B, which can run with as little as 6GB of GPU memory!
  • [2023.01.21] GLM-130B has been accepted to ICLR 2023!
  • [2022.10.06] Our paper for GLM-130B is out!
  • [2022.08.24] We are proud to release the quantized version of GLM-130B. While keeping activations in FP16 precision, the model weights can be quantized to as low as INT4 with almost no performance degradation, further reducing the hardware requirements of GLM-130B to a single server with 4 * RTX 3090 (24G)! See Quantization of GLM-130B for details.

For smaller models, please find monolingual GLMs (English: 10B/2B/515M/410M/335M/110M, Chinese: 10B/335M) and a 1B multilingual GLM (104 languages).

Getting Started

Environment Setup

Hardware

Hardware          GPU Memory   Quantization   Weight Offload
8 * A100          40 GB        No             No
8 * V100          32 GB        No             Yes (BMInf)
8 * V100          32 GB        INT8           No
8 * RTX 3090      24 GB        INT8           No
4 * RTX 3090      24 GB        INT4           No
8 * RTX 2080 Ti   11 GB        INT4           No

It is recommended to use an A100 (40G * 8) server, as all reported GLM-130B evaluation results (~30 tasks) can be easily reproduced with a single A100 server in about half a day. With INT8/INT4 quantization, efficient inference on a single server with 4 * RTX 3090 (24G) is possible; see Quantization of GLM-130B for details. By combining quantization and weight-offloading techniques, GLM-130B inference can also run on servers with even smaller GPU memory; see Low-Resource Inference for details.

Software

The GLM-130B code is built on top of SAT. We recommend using Miniconda to manage your environment and installing additional dependencies via pip install -r requirements.txt. Here are the recommended environment configurations:

  • Python 3.9+ / CUDA 11+ / PyTorch 1.10+ / DeepSpeed 0.6+ / Apex (installation with CUDA and C++ extensions is required, see here)
  • SwissArmyTransformer>=0.2.11 is required for quantization
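
For instance, a minimal environment setup might look like the following sketch. It assumes Miniconda is already installed; the environment name glm-130b and the CUDA 11.3 PyTorch wheel are illustrative choices, not project requirements.

# Create and activate an isolated environment (name is illustrative)
conda create -n glm-130b python=3.9 -y
conda activate glm-130b

# Install a CUDA-enabled PyTorch build matching your local CUDA version
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113

# Install the repository's pinned dependencies
pip install -r requirements.txt

# Build Apex with CUDA and C++ extensions, as required above
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir \
    --global-option="--cpp_ext" --global-option="--cuda_ext" ./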

Model weights

Download the GLM-130B model checkpoint from here, make sure all 60 chunks are downloaded completely, then use the following commands to merge them into a single archive file and extract it:

cat glm-130b-sat.tar.part_* > glm-130b-sat.tar
tar xvf glm-130b-sat.tar
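
To verify the download first, you can check that all 60 chunks are present (a quick sketch, assuming the glm-130b-sat.tar.part_* naming shown above):

ls glm-130b-sat.tar.part_* | wc -l   # should print 60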

Set CHECKPOINT_PATH in configs/model_glm_130b.sh to the path of the extracted folder. Since the checkpoint file is up to 260G, an SSD or RAM disk is recommended to reduce the checkpoint loading time. Since the checkpoint we distribute is split with 8-way tensor parallelism, a conversion script is also provided if you need to change the tensor-parallel dimension.

python tools/convert_tp.py \
    --input-folder <SRC_CKPT_PATH>  \
    --output-folder <DST_CKPT_PATH> \
    --target-tp <TARGET_TP>
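
For example, to repartition the released 8-way checkpoint into 4-way tensor parallelism (matching the 4 * RTX 3090 INT4 setting above), a hypothetical invocation looks like this; the folder paths are illustrative:

python tools/convert_tp.py \
    --input-folder ./glm-130b-sat \
    --output-folder ./glm-130b-sat-tp4 \
    --target-tp 4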

Left-To-Right Generation / Blank Filling

bash scripts/generate.sh --input-source interactive

You can also specify an input file by --input-source input.txt.

GLM-130B uses two different mask tokens: [MASK] for short blank filling and [gMASK] for left-to-right long text generation. When the input does not contain any MASK token, [gMASK] will be automatically appended to the end of the text.
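
As a sketch, a batch input file exercising both modes could contain one prompt per line; the file name and prompts below are illustrative:

# Write two prompts: one short blank filling, one open-ended generation
cat > input.txt <<'EOF'
The capital of France is [MASK].
Once upon a time, [gMASK]
EOF
bash scripts/generate.sh --input-source input.txt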

Left-To-Right Generation Examples

Example 1

Input: Who is the greatest artist? The greatest artist is [gMASK]

Output: Who is the greatest artist? The greatest artist is not the one who paints the most beautiful picture, or writes the best poem, or plays the best symphony. The greatest artist is the one whose work has the greatest impact on the lives of others. The Greatest Artist is the artist who inspires us to be our best selves. The Great Artist is not the artist whose work is flawless. The great artist is not one who is perfect. The true great artist has the ability to make us feel something. The artist who makes us feel is the great artist. The art of the great artists is to make you feel something that you have never felt before. The greatness of an artist is in the ability of their work to touch the lives and hearts of others, and to make them feel something they have never experienced before. This is the true greatness of the artist. 2. The Significance of the Great Artist The significance of the greatest artist cannot be overstated. The significance and impact of the art of a great artist cannot even be measured. The impact of a single work of art can change the course of history. A single work can change a person’s life. A work of great art can make you laugh, make you cry, make your heart race, or make you think. A great work of artistic expression has the power to change the way you look at the world. The work of a truly great artist can change your life forever. 3. The Impact of the Greatest Artist The greatest artists have the ability and the power of their art to change people’s lives. They have the power and the ability, through their art, to make people feel something new. The power of the work of the truly great artists cannot be measured, but it can be felt. The greater the impact of an artists work, the greater the power it has to change lives. 4. The Influence of the Art of the Artist The art and the work created by the greatest artists has the potential to change a life. The influence of the artists work cannot be quantified, but is felt by those who experience it. The truly great work has an ability to touch people’ lives in a way that no other work can.

Example 2 (Chinese)

Input: 问题:冬天,中国哪座城市最适合避寒?问题描述:能推荐一些国内适合冬天避寒的城市吗?回答用户:旅游爱好者 回答: [gMASK]

Output: 问题:冬天,中国哪座城市最适合避寒?问题描述:能推荐一些国内适合冬天避寒的城市吗?回答用户:旅游爱好者 回答:避寒的话,建议去海南三亚。 三亚位于海南岛的最南端,是中国最南部的热带滨海旅游城市,中国海滨城市,是中国空气质量最好的城市、全国最长寿地区(平均寿命80岁)。三亚市别称鹿城,又被称为"东方夏威夷",位居中国四大一线旅游城市"三威杭厦"之首,拥有全岛最美丽的海滨风光。三亚拥有被无数城市嫉妒的清新空气,柔和海滩。在沙滩悠闲散步、沐浴傍晚温和阳光,在海边玩耍,在雨林里面呼吸健康,欣赏自然奇观,一切都是那么令人享受。 气候 三亚地处热带,属热带季风气候区域,终年气温高,寒暑变化不大,年平均气温25.5度。三亚最热的月份是6月,平均气温为28.5°C,极端最高气温为35.7°C ,三亚没有明显的冬季,而且冬季更是到海滨旅游的最好时候,所以三亚是全年全天候避寒、消暑、度假、旅游的好地方。不过,三亚最旺的旅游时间是从10月份到农历春节,这个时候的人特别多,房间也不好订,建议最好避开高峰。三亚的三个旅游高峰期是"五一"、"十一"、"春节",这三个节日期间的房价都会上浮几倍,如果你选择了这三个假期来三亚的话要提前一个星期定房和咨询。

Blank Filling Examples

Example 1

Input: Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.

Output: Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.

Example 2 (Chinese)

Input: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建,门高25米,顶上矗立两武士青铜古兵车铸像。

Output: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念拿破仑胜利而建,门高25米,顶上矗立两武士青铜古兵车铸像。

Arguments Useful in Generation
  • --input-source [path] or "interactive" The input file's path. It can also be "interactive", which will launch a CLI.
  • --output-path [path] The folder to save the results.
  • --out-seq-length [int] The maximum sequence length for generation (including context).
  • --min-gen-length [int] The minimum generation length for each MASK.
  • --sampling-strategy "BaseStrategy" or "BeamSearchStrategy" The sampling strategy used.
    • For BeamSearchStrategy:
      • --num-beams [int] The number of beams.
      • --length-penalty [float] The length penalty applied during beam search; larger values favor longer outputs.
      • --no-repeat-ngram-size [int] Prohibit repeated n-gram generation.
      • --print-all-beam Print the generated results for all beams.
    • For BaseStrategy:
      • --top-k [int] Top-k sampling.
      • --top-p [float] Top-p (nucleus) sampling.
      • --temperature [float] The sampling temperature.
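
Putting these together, a hypothetical sampling run might look like the following; the flag values are illustrative, not recommended defaults:

bash scripts/generate.sh --input-source interactive \
    --output-path ./samples \
    --out-seq-length 256 \
    --min-gen-length 32 \
    --sampling-strategy BaseStrategy \
    --top-k 40 --top-p 0.9 --temperature 0.9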

Evaluation

We use YAML files to define tasks. Specifically, you can add multiple tasks or folders at a time for evaluation; the evaluation script automatically collects all YAML files under those folders recursively.

bash scripts/evaluate.sh task1.yaml task2.yaml dir1 dir2 ...

Download our evaluation dataset here, and set DATA_PATH in scripts/evaluate.sh to your local dataset directory. The tasks folder contains the YAML files for the 30+ tasks we evaluated for GLM-130B. Take the CoLA task as an example: run bash scripts/evaluate.sh tasks/bloom/glue_cola.yaml, which outputs an accuracy of ~65% for the best prompt and ~57% for the median.

Expected Output
MultiChoiceTaskConfig(name='glue_cola', type=<TaskType.MULTICHOICE: 'mul'>, path='/thudm/LargeScale/data/zeroshot/bloom/glue_cola', module=None, metrics=['Accuracy'], use_task_mask=False, use_multitask_encoding=False, unidirectional=False, max_seq_length=2048, file_pattern={'validation': '**/validation.jsonl'}, micro_batch_size=8)
Evaluating task glue_cola:
  Evaluating group validation:
      Finish Following_sentence_acceptable/mul/validation.jsonl, Accuracy = 42.665
      Finish Make_sense_yes_no/mul/validation.jsonl, Accuracy = 56.951
      Finish Previous_sentence_acceptable/mul/validation.jsonl, Accuracy = 65.197
      Finish editing/mul/validation.jsonl, Accuracy = 57.622
      Finish is_this_correct/mul/validation.jsonl, Accuracy = 65.197
Evaluation results of task glue_cola:
  Group validation Accuracy: max = 65.197, median = 57.622, average = 57.526
Finish task glue_cola in 101.2s. 

Multi-node evaluation can be configured by setting HOST_FILE_PATH (required by the DeepSpeed launcher) in scripts/evaluate_multiple_node.sh. Set DATA_PATH in scripts/evaluate_multiple_node.sh and run the following command to evaluate all the tasks in the ./tasks directory.

bash scripts/evaluate_multiple_node.sh ./tasks
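
The hostfile follows the standard DeepSpeed launcher format, one node per line with its GPU slot count. A minimal two-node sketch (hostnames and path are illustrative):

# Write a DeepSpeed hostfile listing each node and its GPU count
cat > ./hostfile <<'EOF'
node1 slots=8
node2 slots=8
EOF
# Point HOST_FILE_PATH in scripts/evaluate_multiple_node.sh at this file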

See Evaluate Your Own Tasks for details on how to add new tasks.

2.5X faster Inference using FasterTransformer

By adapting the GLM-130B model to FasterTransformer, a highly optimized transformer model library by NVIDIA, we can reach up to a 2.5X speedup on generation; see Inference with FasterTransformer for details.

License

This repository is licensed under the Apache-2.0 license. The use of GLM-130B model weights is subject to the Model License.

Citation

If you find our work useful, please consider citing GLM-130B:

@article{zeng2022glm,
  title={Glm-130b: An open bilingual pre-trained model},
  author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
  journal={arXiv preprint arXiv:2210.02414},
  year={2022}
}

You may also consider citing GLM's original work:

@inproceedings{du2022glm,
  title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
  author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={320--335},
  year={2022}
}

More Repositories

1. ChatGLM-6B - An Open Bilingual Dialogue Language Model (Python, 39,910 stars)
2. ChatGLM2-6B - An Open Bilingual Chat LLM (Python, 15,602 stars)
3. ChatGLM3 - ChatGLM3 series: Open Bilingual Chat LLMs (Python, 12,976 stars)
4. CodeGeeX - An Open Multilingual Code Generation Model (KDD 2023) (Python, 7,924 stars)
5. CodeGeeX2 - A More Powerful Multilingual Code Generation Model (Python, 7,545 stars)
6. CogVLM - A state-of-the-art open visual language model and multimodal pre-trained model (Python, 5,582 stars)
7. VisualGLM-6B - Chinese and English multimodal conversational language model (Python, 4,051 stars)
8. GLM-4 - GLM-4 series: Open Multilingual Multimodal Chat LMs (Python, 3,600 stars)
9. CogVideo - Text-to-video generation; the repo for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers" (Python, 3,542 stars)
10. GLM - GLM (General Language Model) (Python, 3,093 stars)
11. AgentBench - A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR 2024) (Python, 2,081 stars)
12. P-tuning-v2 - An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks (Python, 1,952 stars)
13. CogVLM2 - GPT4V-level open-source multi-modal model based on Llama3-8B (Python, 1,842 stars)
14. CogDL - A Comprehensive Library for Graph Deep Learning (WWW 2023) (Python, 1,710 stars)
15. CogView - Text-to-image generation; the repo for the NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers" (Python, 1,628 stars)
16. WebGLM - An Efficient Web-Enhanced Question Answering System (KDD 2023) (Python, 1,532 stars)
17. AgentTuning - Enabling Generalized Agent Abilities for LLMs (Python, 1,266 stars)
18. ImageReward - Learning and Evaluating Human Preferences for Text-to-Image Generation (NeurIPS 2023) (Python, 1,017 stars)
19. SwissArmyTransformer - A flexible and powerful library to develop your own Transformer variants (Python, 953 stars)
20. CogView2 - Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers" (Python, 932 stars)
21. P-tuning - A novel method to tune language models; code and datasets for the paper "GPT understands, too" (Python, 911 stars)
22. LongBench - A Bilingual, Multitask Benchmark for Long Context Understanding (Python, 533 stars)
23. AutoWebGLM - An LLM-based Web Navigating Agent (KDD 2024) (Python, 521 stars)
24. GATNE - Source code and dataset for the KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network" (Python, 519 stars)
25. CogQA - Source code and dataset for the ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale" (Python, 454 stars)
26. GraphMAE - Self-Supervised Masked Graph Autoencoders (KDD 2022) (Python, 437 stars)
27. GCC - Graph Contrastive Coding for Graph Neural Network Pre-Training (KDD 2020) (Python, 320 stars)
28. MathGLM - Official PyTorch implementation of MathGLM (Python, 313 stars)
29. HGB - Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks (Python, 299 stars)
30. Inf-DiT - Official implementation of "Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer" (Python, 277 stars)
31. ComiRec - Source code and dataset for the KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation" (Python, 275 stars)
32. AlignBench - A multi-dimensional Chinese alignment benchmark for large language models (ACL 2024) (Python, 247 stars)
33. RelayDiffusion - Official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" (ICLR 2024 Spotlight) (Python, 245 stars)
34. KOBE - Towards Knowledge-Based Personalized Product Description Generation in E-commerce (KDD 2019) (Python, 237 stars)
35. NLP4Rec-Papers - Paper list of NLP for recommender systems (226 stars)
36. ProNE - Source code and dataset for the IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning" (Python, 225 stars)
37. Chinese-Transformer-XL (Python, 214 stars)
38. GRAND - Source code and dataset for the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs" (Python, 201 stars)
39. LongAlign - A Recipe for Long Context Alignment Encompassing Data, Training, and Evaluation (Python, 150 stars)
40. icetk - A unified tokenization tool for images, Chinese, and English (Python, 146 stars)
41. CogCoM (Jupyter Notebook, 140 stars)
42. KBRD - Towards Knowledge-Based Recommender Dialog System (EMNLP 2019) (Python, 134 stars)
43. GraphMAE2 - A Decoding-Enhanced Masked Self-Supervised Graph Learner (WWW 2023) (Python, 124 stars)
44. iPrompt - Code, data, and demo for the paper "Controllable Generation from Pre-trained Language Models via Inverse Prompting" (Python, 120 stars)
45. MCNS - Source code and dataset for the KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning" (Python, 111 stars)
46. ProteinLM - Protein Language Model (Python, 108 stars)
47. grb - Graph Robustness Benchmark: a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of graph machine learning (Python, 91 stars)
48. GraphSGAN - Implementation of GraphSGAN, a GAN-based semi-supervised learning algorithm for graph data (Python, 85 stars)
49. kgTransformer - Pre-training for reasoning over complex KG queries (KDD 2022) (Python, 82 stars)
50. ScenarioMeta - Source code and dataset for the KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation" (Python, 80 stars)
51. OAG-BERT - A heterogeneous entity-augmented academic language model based on the Open Academic Graph (OAG) (75 stars)
52. CogKR - Source code and dataset for the paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning" (Python, 71 stars)
53. ReST-MCTS - ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (Python, 71 stars)
54. SelfKG - Code for the WWW 2022 paper "SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs" (Python, 67 stars)
55. ChatGLM-Math (Python, 67 stars)
56. FewNLU (Python, 66 stars)
57. Multilingual-GLM - The multilingual variant of GLM, a general language model trained with an autoregressive blank infilling objective (Python, 63 stars)
58. XDAI (Python, 61 stars)
59. SciGLM - Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (Python, 59 stars)
60. OAG - Source code and dataset for the KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs" (Python, 59 stars)
61. CogAgent (47 stars)
62. Graph-Reading-Group - Daily reading group on graphs at KEG (45 stars)
63. SCR - Training Graph Neural Networks with Consistency Regularization (Python, 38 stars)
64. NaturalCodeBench (Python, 37 stars)
65. FastLDM - Inference speed-up for stable-diffusion (LDM) with TensorRT (Python, 34 stars)
66. WhoIsWho - Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit (KDD 2023) (Python, 33 stars)
67. AutoRE (Python, 32 stars)
68. GRAND-plus - Code and dataset for the paper "GRAND+: Scalable Graph Random Neural Networks" (Python, 31 stars)
69. LVBench - An Extreme Long Video Understanding Benchmark (Python, 30 stars)
70. GraphCAD - GraphCAD (TKDE 2022): https://arxiv.org/pdf/2108.07516.pdf (Python, 30 stars)
71. KDD-Industrial-Papers - A list of recent industrial papers in KDD '16-'18 (29 stars)
72. ApeGNN - Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW 2023) (Python, 22 stars)
73. GLM-iprompt - Apply iPrompt on GLM with innovative new methods; currently supports Chinese QA, English QA, and Chinese poem generation (Python, 21 stars)
74. GIAAD - Graph Injection Adversarial Attack & Defense dataset, extracted from the KDD Cup 2020 ML2 track (Python, 21 stars)
75. Tsinghua-ML-Course - Course materials for the ML course at Tsinghua (HTML, 21 stars)
76. HOSMEL - A task-relevant entity linking toolkit (Python, 20 stars)
77. tdgia - Code for the KDD 2021 (research track) paper "TDGIA: Effective Injection Attacks on Graph Neural Networks" (Python, 18 stars)
78. BatchSampler - Source code for BatchSampler, accepted at KDD 2023 (Python, 18 stars)
79. MRT - Tracing the Evolution of Scientific Publications (TKDE 2021) (17 stars)
80. eTrust - Source code and dataset for the TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform" (C++, 16 stars)
81. RecDCL - Dual Contrastive Learning for Recommendation (WWW 2024, Oral) (Python, 16 stars)
82. LargeScale (Python, 15 stars)
83. Self-Contrast - Extensive Self-Contrast Enables Feedback-Free Language Model Alignment (Python, 15 stars)
84. Efficient-Head-Finetuning - Source code for the EMNLP 2022 long paper "Parameter-Efficient Tuning Makes a Good Classification Head" (Python, 14 stars)
85. whoiswho-top-solutions (Python, 14 stars)
86. MSAGPT (Python, 13 stars)
87. IGB - Source code and dataset for the IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs" (Python, 12 stars)
88. paper-source-trace (Python, 12 stars)
89. citation-prediction (Python, 8 stars)
90. scholar-profiling (Jupyter Notebook, 6 stars)
91. OAG-AQA (Python, 6 stars)
92. OAG-taxo (Python, 5 stars)
93. Refined-cora-citeseer (5 stars)
94. DropConn - Dropout Connection Based Random GNNs for Molecular Property Prediction (TKDE 2024) (Python, 5 stars)
95. OAG-entity-alignment (Python, 5 stars)
96. STAM - Source code and dataset for the WWW 2022 paper "STAM: A Spatiotemporal Aggregation Method for Graph Neural Network-based Recommendation" (Python, 4 stars)
97. open_clip_pix2struct - pix2struct version of open_clip (Jupyter Notebook, 4 stars)
98. DistAlign-GNNs (Jupyter Notebook, 3 stars)
99. RecNS - Source code and dataset for the TKDE 2022 paper "Region or Global? A Principle for Negative Sampling in Graph-based Recommendation" (Python, 3 stars)
100. Paper-Rec (Python, 2 stars)