• Stars: 3,168
  • Rank: 14,184 (Top 0.3%)
  • Language: Python
  • License: MIT License
  • Created: over 3 years ago
  • Updated: about 1 year ago


Repository Details

GLM (General Language Model)

GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
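To make the objective concrete, below is an illustrative sketch in Python (not the repository's preprocessing code) of how a blank-infilling sample can be laid out: masked spans are replaced with [MASK] in the corrupted input (Part A) and generated autoregressively as targets (Part B). Span shuffling and the 2D positional encodings used by GLM are omitted here.

# Illustrative sketch only: lay out a blank-infilling sample as Part A / Part B.
tokens = ["GLM", "is", "pretrained", "with", "blank", "infilling"]
spans = [(2, 3), (4, 6)]  # token index ranges chosen for masking

part_a, part_b, last = [], [], 0
for start, end in spans:
    part_a += tokens[last:start] + ["[MASK]"]           # corrupted input
    part_b += ["<|startofpiece|>"] + tokens[start:end]  # autoregressive targets
    last = end
part_a += tokens[last:]

print("Part A:", " ".join(part_a))  # GLM is [MASK] with [MASK]
print("Part B:", " ".join(part_b))  # <|startofpiece|> pretrained <|startofpiece|> blank infilling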

Please refer to our paper for a detailed description of GLM:

GLM: General Language Model Pretraining with Autoregressive Blank Infilling (ACL 2022)

Zhengxiao Du*, Yujie Qian*, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang (*: equal contribution)

News: We have released ChatGLM-6B, an open pretrained language model with 6 billion parameters, built on the GLM framework and optimized for Chinese QA and dialogue.

Pretrained Models

You can download the pretrained models used in the paper from OneDrive or Tsinghua-Cloud.

Name Params Language Corpus Objective File Config
GLM-Base 110M English Wiki+Book Token glm-base-blank.tar.bz2 model_blocklm_base.sh
GLM-Large 335M English Wiki+Book Token glm-large-blank.tar.bz2 model_blocklm_large.sh
GLM-Large-Chinese 335M Chinese WuDaoCorpora Token+Sent+Doc glm-large-chinese.tar.bz2 model_blocklm_large_chinese.sh
GLM-Doc 335M English Wiki+Book Token+Doc glm-large-generation.tar.bz2 model_blocklm_large_generation.sh
GLM-410M 410M English Wiki+Book Token+Doc glm-1.25-generation.tar.bz2 model_blocklm_1.25_generation.sh
GLM-515M 515M English Wiki+Book Token+Doc glm-1.5-generation.tar.bz2 model_blocklm_1.5_generation.sh
GLM-RoBERTa 335M English RoBERTa Token glm-roberta-large-blank.tar.bz2 model_blocklm_roberta_large.sh
GLM-2B 2B English Pile Token+Sent+Doc glm-2b.tar.bz2 model_blocklm_2B.sh
GLM-10B 10B English Pile Token+Sent+Doc Download model_blocklm_10B.sh
GLM-10B-Chinese 10B Chinese WuDaoCorpora Token+Sent+Doc Download model_blocklm_10B_chinese.sh

Unzip the downloaded file into a local folder and set CHECKPOINT_PATH in the corresponding scripts to the folder path.
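If you prefer to script this step, here is a minimal Python sketch; the archive name and target directory are placeholders for your own paths.

# Minimal sketch: extract a downloaded checkpoint archive (paths are placeholders).
import tarfile

archive = "glm-large-blank.tar.bz2"
target_dir = "checkpoints/glm-large"

with tarfile.open(archive, "r:bz2") as tar:
    tar.extractall(target_dir)
# Afterwards, set CHECKPOINT_PATH in the corresponding script to target_dir.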

Results

SuperGLUE

dev set, single model, single-task finetuning

Model COPA WSC RTE WiC CB MultiRC BoolQ ReCoRD
GLM-10B 98.0 95.2 93.1 75.7 98.7/98.2 88.1/63.3 88.7 94.4/94.0
DeBERTa-XXLarge-v2 97.0 - 93.5 - - 87.8/63.6 88.3 94.1/93.7

Seq2Seq

CNN/Daily Mail (test set, no additional data used)

Model ROUGE-1 ROUGE-2 ROUGE-L
GLM-10B 44.7 21.4 41.4
T5-11B 43.5 21.6 40.7
PEGASUS-Large 44.2 21.5 41.4
BART-Large 44.2 21.3 40.9

XSum (test set, no additional data used)

Model ROUGE-1 ROUGE-2 ROUGE-L
GLM-10B 48.9 25.7 40.4
PEGASUS-Large 47.2 24.6 39.3
BART-Large 45.1 22.3 37.3

Language Modeling

test set, zero-shot

Model LAMBADA (accuracy) Wikitext103 (perplexity)
GLM-10B (bi) 72.35 11.33
GLM-10B (uni) 67.18 12.22
GPT-2 52.66 17.48
Megatron-LM (8.3B) 66.51 10.81
Turing-NLG 67.98 10.21

Get Started

Hugging Face Hub

You can access GLM models via the Hugging Face Hub. Please install transformers>=4.23.1; all the available models are listed here.

Generation

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = model.half().cuda()
model.eval()

# Inference
inputs = tokenizer("Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.", return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=512)
inputs = inputs.to('cuda')
outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))

# Training
inputs = tokenizer(
    ["Tsinghua University is located in [MASK].", "One minus one equals zero, is it correct? Answer: [MASK]"],
    return_tensors="pt", padding=True)
inputs = tokenizer.build_inputs_for_generation(inputs, targets=["Beijing", "No"], max_gen_length=8, padding=False)
inputs = inputs.to('cuda')
outputs = model(**inputs)
loss = outputs.loss
logits = outputs.logits

Classification

from transformers import AutoTokenizer, AutoModelForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = AutoModelForMultipleChoice.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = model.half().cuda()
model.eval()

inputs = tokenizer(["Tsinghua University is located in [MASK].",
                    "One minus one equals zero, is it correct? Answer: [MASK]"], return_tensors="pt", padding=True)
choices = [["Beijing", "Shanghai"], ["Yes", "No"]]
inputs = tokenizer.build_inputs_for_multiple_choice(inputs, choices)
inputs = inputs.to('cuda')
outputs = model(**inputs)
logits = outputs.logits

You can also convert the finetuned checkpoints with scripts/convert_glm_checkpoint_to_transformers.py.

Docker Image

We provide two Docker images, based on CUDA 10.2 and CUDA 11.2. You can pull the pre-built images from Docker Hub and run them with Docker v19.03+:

docker run --gpus all --rm -it --ipc=host zxdu20/glm-cuda102

or replace glm-cuda102 with glm-cuda112.

You can also modify docker/cuda102.dockerfile to suit your requirements and build the image yourself:

  docker build -f cuda102.dockerfile . -t glm-cuda102

Manual Installation

Please first install PyTorch (we use 1.7.0) and apex, then install the other dependencies with pip install -r requirements.txt.

Clone this repo

git clone https://github.com/THUDM/GLM
cd GLM

Model Parallelism

If you encounter a CUDA out-of-memory error, meaning your GPU memory is limited, you can use model parallelism to split the parameters across multiple GPUs. Take two-way model parallelism as an example. First run change_mp.py to split the checkpoint:

python change_mp.py path_to_the_checkpoint 2

Then update the checkpoint path in the model config file (such as config_tasks/model_blocklm_10B.sh) and change MP_SIZE in the script (such as scripts/ds_finetune_superglue.sh) to 2.

Usage

We provide scripts for finetuning GLM on some downstream tasks.

Left-to-Right Generation / Blank Filling (Interactive)

  • Change CHECKPOINT_PATH to your local path. Run the following script
bash scripts/generate_block.sh \
     config_tasks/model_blocklm_10B_chinese.sh

Some models (GLM-2B, GLM-10B, and GLM-10B-Chinese) use three different mask tokens: [MASK] for short blank filling, [sMASK] for sentence filling, and [gMASK] for left-to-right generation.
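As a purely illustrative sketch (the prompts below are made up, not taken from the repository), the choice of mask token determines what the model is asked to produce:

# Illustrative only: which mask token to use for each kind of request.
prompts = {
    "[MASK]":  "The capital of France is [MASK].",                 # short blank filling
    "[sMASK]": "GLM unifies NLU and generation. [sMASK]",          # sentence filling
    "[gMASK]": "Write a short travel guide for Beijing: [gMASK]",  # left-to-right generation
}
for mask_token, prompt in prompts.items():
    print(f"{mask_token:8s} -> {prompt}")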

Examples

Usage of [MASK] (Entity Prediction):

Example1

Context: Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.

GLM: the stanford university

Example2 (Chinese)

Context: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建，门高25米，顶上矗立两武士青铜古兵车铸像。

GLM: 拿破仑军队攻克米兰城

Usage of [sMASK] (Sentence Prediction)

Example3

Context: There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). [sMASK] We propose a General Language Model ( GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25ร— parameters of BERT Large, demonstrating its generalizability to different downstream tasks.

GLM: However, there is a growing need to develop a single pretraining model that is not only good at natural language understanding (NLU) or dialog generation/generation (dialog), but is also able to predict other tasks such as sentiment analysis, conditional generation, or machine translation (MT).

Example4 (Chinese)

Context: 工业互联网（Industrial Internet）是新一代信息通信技术与工业经济深度融合的新型基础设施、应用模式和工业生态，通过对人、机、物、系统等的全面连接，构建起覆盖全产业链、全价值链的全新制造和服务体系，为工业乃至产业数字化、网络化、智能化发展提供了实现途径，是第四次工业革命的重要基石。[sMASK] 它以网络为基础、平台为中枢、数据为要素、安全为保障，既是工业数字化、网络化、智能化转型的基础设施，也是互联网、大数据、人工智能与实体经济深度融合的应用模式，同时也是一种新业态、新产业，将重塑企业形态、供应链和产业链。当前，工业互联网融合应用向国民经济重点行业广泛拓展，形成平台化设计、智能化制造、网络化协同、个性化定制、服务化延伸、数字化管理六大新模式，赋能、赋智、赋值作用不断显现，有力的促进了实体经济提质、增效、降本、绿色、安全发展。

GLM: 工业互联网是制造业技术、管理、模式的重大变革,是推动互联网、大数据、人工智能和实体经济深度融合的重要载体,是建设制造强国和网络强国的重要基础。

Usage of [gMASK] (Long Text Generation)

Example5 (Chinese)

Context: 问题：冬天，中国哪座城市最适合避寒？问题描述：能推荐一些国内适合冬天避寒的城市吗？回答用户：旅游爱好者 回答： [gMASK]

GLM: 海南三亚,三亚的冬天是三亚的旺季,因为冬天国内大部分地方都有雾霾,而三亚却是没有雾霾的地方,所以三亚是冬天最惬意不过的地方了。在东北长大的我觉得三亚简直就是一个天堂,冬天去海口三亚旅游,享受冬天,享受阳光沙滩。但是海口却是很干燥,需要多喝水。 三亚冬天最热门的玩法就是晒太阳,在海边晒晒太阳,在沙滩上晒晒太阳,感觉整个人都得到了解放。三亚还有一个特色项目,就是海上冲浪,在三亚的沙滩上冲浪也是一件非常刺激的事情。 海口,海口冬季的阳光十分温暖,海南的冬季也是属于冬季旅游的旺季。冬季的海口最棒的是去海南的热带野生动植物园,那里有数之不尽的热带小动物,在这里可以近距离的和它们接触,海南的热带野生动植物园也是海南的天然氧吧。还可以在海口观澜湖公园里感受海口美丽的海景。 贵阳,贵州的冬天也是十分温暖的,贵阳也是冬季避寒很好的城市之一。冬季去贵阳玩一定要去黔灵山,黔灵山是贵州香火很旺盛的一个寺庙,寺庙的冬季香火鼎盛,在冬季去寺庙游玩也是一个很好的体验。除了黔灵山,贵阳在冬季还有花溪公园可以去玩,花溪公园也是去当地公园玩最好的选择。 青岛,青岛的冬天是青岛最舒服的时候,青岛有很多海滨浴场,冬天去海边泡一泡温泉,然后晒晒太阳是一件十分惬意的事情。青岛也有沙滩,冬天在沙滩上晒晒太阳,看看海,再玩玩沙滩游戏,感觉十分快乐的事。

You can also include multiple [MASK] and [sMASK] tokens in a single example. The model will fill the blanks one by one from left to right. The answer to each blank always begins with a special <|startofpiece|> token.
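Below is a minimal sketch of recovering the per-blank answers from a decoded output string, assuming the decoded text keeps the <|startofpiece|> markers as in the example that follows:

# Sketch: split a decoded GLM output into per-blank answers (assumes the
# <|startofpiece|> markers are preserved in the decoded string).
def split_blank_answers(decoded):
    pieces = decoded.split("<|startofpiece|>")[1:]  # drop the text before the first marker
    return [piece.strip() for piece in pieces]

decoded = "<|startofpiece|> blank filling models<|startofpiece|> other pretrained models"
print(split_blank_answers(decoded))  # ['blank filling models', 'other pretrained models']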

Examples
Example1

Context: There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and [MASK] (e.g., T5). [sMASK] We propose a General Language Model ( GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over [MASK] on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional and [MASK], GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25ร— parameters of BERT Large , demonstrating its generalizability to different downstream tasks.

GLM: <|startofpiece|> blank filling models<|startofpiece|> However, most of them cannot easily transfer to other downstream tasks due to the different characteristics of these tasks.<|startofpiece|> other pretrained models<|startofpiece|> unconditional reading, and semantic role labeling tasks

Example2 (Chinese)

Context: 工业互联网（Industrial Internet）是新一代[MASK]与[MASK]深度融合的新型基础设施、应用模式和工业生态，通过对人、机、物、系统等的全面连接，构建起覆盖全产业链、全价值链的全新制造和服务体系，为工业乃至产业数字化、网络化、智能化发展提供了实现途径，是第四次工业革命的重要基石。[sMASK] 它以网络为基础、平台为中枢、数据为要素、安全为保障，既是工业数字化、网络化、智能化转型的基础设施，也是互联网、大数据、人工智能与实体经济深度融合的应用模式，同时也是一种新业态、新产业，将重塑企业形态、供应链和产业链。当前，工业互联网融合应用向国民经济重点行业广泛拓展，形成[MASK]、智能化制造、[MASK]、个性化定制、服务化延伸、数字化管理六大新模式，赋能、赋智、赋值作用不断显现，有力的促进了实体经济提质、增效、降本、绿色、安全发展。

GLM: <|startofpiece|>信息技术(ICT)<|startofpiece|>工业经济(II2O)<|startofpiece|>我国工业互联网是面向工业全领域、全流程、全体系的互联网,具有多产业、多领域融合的特点。<|startofpiece|>网络化协同<|startofpiece|>平台企业

SuperGLUE

  • Download the SuperGlue data and check the experiment setup in scripts/ds_finetune_superglue.sh. Note that DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH need to be changed to your local path. You may also change the batch-size and nproc_per_node according to your available hardware.

  • Run the following script (use the COPA dataset as an example)

bash scripts/ds_finetune_superglue.sh \
     config_tasks/model_blocklm_10B.sh \
     config_tasks/task_copa.sh
  • We also implement P-tuning in our code. Run the following script to finetune with P-tuning:
bash scripts/ds_finetune_superglue_prompt.sh \
     config_tasks/model_blocklm_10B.sh \
     config_tasks/task_copa.sh

Seq2Seq

  • Download the Gigaword, CNN/Daily Mail, or XSum dataset and check the experiment setup in scripts/ds_finetune_seq2seq.sh. Change DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH to your local path.

  • Run the following script (use the CNN/Daily Mail dataset as an example)

    bash scripts/ds_finetune_seq2seq.sh \
       config_tasks/model_blocklm_10B.sh \
       config_tasks/seq_cnndm_org.sh
    
  • The summaries are written into ./runs/experiment_name/test.jsonl.hyps. The references are written into test.jsonl.refs in the same directory. For calculating rouge, install file2rouge and download Stanford CoreNLP from here. Run the following script

    bash scripts/evaluate_seq2seq.sh \
     ./runs/experiment_name/test.jsonl.hyps ./runs/experiment_name/test.jsonl.refs
    

Train with your own data

Process your seq2seq data into {split}.source and {split}.target, with each line being the context or the target of a sample, and split being train, val, and test.
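For example, a minimal Python sketch that writes files in this layout (the directory name and sample pairs are made up):

# Sketch: write seq2seq data as {split}.source / {split}.target, one sample per line.
import os

samples = {
    "train": [("Context of sample 1", "Target of sample 1"),
              ("Context of sample 2", "Target of sample 2")],
    "val":   [("Validation context", "Validation target")],
    "test":  [("Test context", "Test target")],
}

data_dir = "my_seq2seq_data"  # placeholder; point the task config's data path here
os.makedirs(data_dir, exist_ok=True)
for split, pairs in samples.items():
    with open(os.path.join(data_dir, f"{split}.source"), "w") as src, \
         open(os.path.join(data_dir, f"{split}.target"), "w") as tgt:
        for context, target in pairs:
            src.write(context.replace("\n", " ") + "\n")
            tgt.write(target.replace("\n", " ") + "\n")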

Run the following script

bash scripts/ds_finetune_seq2seq.sh \
   config_tasks/model_blocklm_10B.sh \
   config_tasks/seq_customization.sh

You can specify the hyperparameters in config_tasks/seq_customization.sh and config_tasks/config_blocklm_10B_cnndm.json

Multiple Choice (Zero-shot)

bash scripts/evaluate_multichoice.sh config_tasks/model_blocklm_10B.sh

Note that CHECKPOINT_PATH and DATA_PATH need to be changed to your local path.

The format of each line of the data file should be

{"inputs_pretokenized": "Context and question here", "choices_pretokenized": ["Choice 1", "Choice 2", "Choice 3"], "label": int}

Language Modeling

LAMBADA Cloze Accuracy

bash scripts/evaluate_lm.sh \
     config_tasks/model_blocklm_large_generation.sh \
     config_tasks/zero_lambada.sh

LM Perplexity

Text Infilling

  • Download the Yahoo dataset and check the experiment setup in scripts/finetune_blank.sh. Change DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH to your local path.

  • Run the following script

bash scripts/finetune_blank.sh \
     config_tasks/model_blocklm_large.sh \
     config_tasks/seq_blank.sh

Pretrain

Run the following script to pre-train the GLM-Large model

bash scripts/ds_pretrain_nvidia.sh config/ds_block_large.sh

The script scripts/ds_pretrain_nvidia.sh launches the training program with DeepSpeed. Change NUM_WORKERS and NUM_GPUS_PER_WORKER to the number of workers and the number of GPUs per worker, and change HOST_FILE_PATH to the path of an OpenMPI-style hostfile. More details about the DeepSpeed launcher can be found here.

The file config/ds_block_large.sh defines the hyperparameters for pretraining. Most of the arguments are fairly self-explanatory. Specifically, --train-data can be multiple keywords defined in NAMED_CORPORA in data_utils/corpora.py. The hyperparameters of the optimizer are defined in the corresponding json file under config. The semantics of the json file can be found here.
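For illustration only (these are not the repository's actual settings), the optimizer and fp16 sections of a DeepSpeed-style config json generally have the following shape; every value below is a placeholder:

# Illustration only: general shape of a DeepSpeed config; all values are placeholders.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "optimizer": {
        "type": "Adam",
        "params": {"lr": 1e-4, "betas": [0.9, 0.95], "eps": 1e-8, "weight_decay": 0.1},
    },
    "fp16": {"enabled": True},
}
print(json.dumps(ds_config, indent=2))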

Citation

Part of the code is based on Megatron-LM and PET.

Please cite our paper if you find this code useful for your research:

@inproceedings{DBLP:conf/acl/DuQLDQY022,
  author    = {Zhengxiao Du and
               Yujie Qian and
               Xiao Liu and
               Ming Ding and
               Jiezhong Qiu and
               Zhilin Yang and
               Jie Tang},
  title     = {{GLM:} General Language Model Pretraining with Autoregressive Blank Infilling},
  booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational
               Linguistics (Volume 1: Long Papers), {ACL} 2022, Dublin, Ireland,
               May 22-27, 2022},
  pages     = {320--335},
  publisher = {Association for Computational Linguistics},
  year      = {2022},
}

More Repositories

1

ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Python
40,459
star
2

ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
Python
15,702
star
3

ChatGLM3

ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型
Python
13,366
star
4

CodeGeeX

CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Python
8,150
star
5

CogVideo

Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
Python
7,976
star
6

GLM-130B

GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Python
7,653
star
7

CodeGeeX2

CodeGeeX2: A More Powerful Multilingual Code Generation Model
Python
7,622
star
8

CogVLM

a state-of-the-art-level open visual language model | 多模态预训练模型
Python
5,913
star
9

GLM-4

GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型
Python
4,826
star
10

VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Python
4,076
star
11

AgentBench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Python
2,144
star
12

CogVLM2

GPT4V-level open-source multi-modal model based on Llama3-8B
Python
2,018
star
13

P-tuning-v2

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Python
1,968
star
14

CogDL

CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023)
Python
1,720
star
15

CogView

Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
Python
1,691
star
16

WebGLM

WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023)
Python
1,557
star
17

AgentTuning

AgentTuning: Enabling Generalized Agent Abilities for LLMs
Python
1,339
star
18

CodeGeeX4

CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more.
Python
1,271
star
19

ImageReward

[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
Python
1,117
star
20

LongWriter

LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
Python
1,076
star
21

SwissArmyTransformer

SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
Python
966
star
22

CogView2

official code repo for paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"
Python
944
star
23

P-tuning

A novel method to tune language models. Codes and datasets for paper ``GPT understands, too''.
Python
915
star
24

LongBench

[ACL 2024] LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
Python
629
star
25

AutoWebGLM

An LLM-based Web Navigating Agent (KDD'24)
Python
584
star
26

GATNE

Source code and dataset for KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network"
Python
522
star
27

GraphMAE

GraphMAE: Self-Supervised Masked Graph Autoencoders in KDD'22
Python
462
star
28

CogQA

Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
Python
456
star
29

Inf-DiT

Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer
Python
366
star
30

GCC

GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training @ KDD 2020
Python
322
star
31

MathGLM

Official Pytorch Implementation for MathGLM
Python
316
star
32

HGB

Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks.
Python
301
star
33

AlignBench

A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024)
Python
295
star
34

ComiRec

Source code and dataset for KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation"
Python
278
star
35

LongCite

LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
Python
272
star
36

RelayDiffusion

The official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" [ICLR 2024 Spotlight]
Python
262
star
37

KOBE

Towards Knowledge-Based Personalized Product Description Generation in E-commerce @ KDD 2019
Python
237
star
38

NLP4Rec-Papers

Paper list of NLP for recommender systems
225
star
39

ProNE

Source code and dataset for IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning"
Python
225
star
40

Chinese-Transformer-XL

Python
218
star
41

GRAND

Source code and dataset of the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs"
Python
203
star
42

LongAlign

[EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs
Python
199
star
43

icetk

A unified tokenization tool for Images, Chinese and English.
Python
150
star
44

CogCoM

Jupyter Notebook
146
star
45

ReST-MCTS

ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024)
Python
146
star
46

KBRD

Towards Knowledge-Based Recommender Dialog System @ EMNLP 2019
Python
134
star
47

GraphMAE2

GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner in WWW'23
Python
133
star
48

iPrompt

Code, Data and Demo for Paper: Controllable Generation from Pre-trained Language Models via Inverse Prompting
Python
121
star
49

ProteinLM

Protein Language Model
Python
111
star
50

MCNS

Source code and dataset for KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning"
Python
111
star
51

VisualAgentBench

Towards Large Multimodal Models as Visual Foundation Agents
Python
94
star
52

CogView3

Text-to-image generation: CogView3-Plus and CogView3 (ECCV 2024)
Python
93
star
53

grb

Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Python
91
star
54

GraphSGAN

Implementation of "GraphSGAN", a GAN-based semi-supervised learning algorithm for graph data.
Python
85
star
55

kgTransformer

kgTransformer: pre-training for reasoning over complex KG queries (KDD 22)
Python
83
star
56

ScenarioMeta

Source code and dataset for KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation"
Python
80
star
57

OAG-BERT

A heterogeneous entity-augmented academic language model based on Open Academic Graph (OAG)
76
star
58

ChatGLM-Math

Python
75
star
59

CogKR

Source code and dataset for paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning"
Python
71
star
60

SelfKG

Codes for WWW2022 accepted paper: SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs
Python
67
star
61

FewNLU

Python
65
star
62

SciGLM

SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024)
Python
62
star
63

Multilingual-GLM

The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective
Python
62
star
64

XDAI

Python
61
star
65

CogAgent

59
star
66

OAG

Source code and dataset for KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs"
Python
59
star
67

NaturalCodeBench

Python
54
star
68

LVBench

LVBench: An Extreme Long Video Understanding Benchmark
Python
52
star
69

AutoRE

Python
45
star
70

Graph-Reading-Group

Daily reading group on graphs at KEG
44
star
71

SCR

SCR: Training Graph Neural Networks with Consistency Regularization
Python
37
star
72

WhoIsWho

KDD'23 Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit
Python
34
star
73

FastLDM

Inference speed-up for stable-diffusion (ldm) with TensorRT.
Python
34
star
74

GraphCAD

TKDE'22-GraphCAD: https://arxiv.org/pdf/2108.07516.pdf
Python
30
star
75

GRAND-plus

Code and dataset for paper "GRAND+: Scalable Graph Random Neural Networks"
Python
30
star
76

KDD-Industrial-Papers

A list of recent industrial papers in KDD'16–'18
28
star
77

ApeGNN

ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23)
Python
23
star
78

GLM-iprompt

Apply iPrompt on GLM with innovative new methods. Currently supports Chinese QA, English QA, and Chinese poem generation.
Python
21
star
79

GIAAD

Graph Injection Adversarial Attack & Defense Dataset , extracted from KDD CUP 2020 ML2 Track
Python
21
star
80

Tsinghua-ML-Course

Course Materials for ML Course at Tsinghua
HTML
21
star
81

HOSMEL

A task relevant entity linking toolkit
Python
20
star
82

Self-Contrast

Extensive Self-Contrast Enables Feedback-Free Language Model Alignment
Python
19
star
83

RecDCL

RecDCL: Dual Contrastive Learning for Recommendation (WWW'24, Oral)
Python
19
star
84

tdgia

Code for the paper "TDGIA: Effective Injection Attacks on Graph Neural Networks" (KDD 2021, research track)
Python
18
star
85

BatchSampler

The source code for BatchSampler, accepted at KDD'23
Python
18
star
86

MRT

MRT: Tracing the Evolution of Scientific Publications (TKDE 2021)
16
star
87

LargeScale

Python
15
star
88

eTrust

Source code and dataset for TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform"
C++
15
star
89

MSAGPT

MSAGPT
Python
15
star
90

whoiswho-top-solutions

Python
14
star
91

paper-source-trace

Python
14
star
92

Efficient-Head-Finetuning

Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head
Python
13
star
93

IGB

Source code and dataset for IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs"
Python
10
star
94

BattleAgentBench

Python
9
star
95

GraphAlign

GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment
Python
8
star
96

APAR

APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
Python
8
star
97

scholar-profiling

Jupyter Notebook
7
star
98

citation-prediction

Python
7
star
99

OpenWebAgent

A convenient framework for developing LLM- and LMM-based web agents.
JavaScript
6
star
100

OAG-AQA

Python
6
star