• Stars: 7,022
  • Rank: 5,326 (Top 0.2%)
  • Language: Python
  • License: Apache License 2.0
  • Created: 10 months ago
  • Updated: 3 months ago


Repository Details

CodeGeeX2: A More Powerful Multilingual Code Generation Model

🏠 Homepage | 🛠 Plugins (VS Code, Jetbrains) | 🤗 Model Download | 📄 Paper | 👋 Join the WeChat Developer Group


CodeGeeX2 is the second generation of the multilingual code generation model CodeGeeX (KDD'23). Unlike the first-generation CodeGeeX (trained entirely on the Huawei Ascend platform), CodeGeeX2 is built on the ChatGLM2 architecture with additional code pre-training. Thanks to ChatGLM2's stronger performance, CodeGeeX2 improves on many metrics (+107% over CodeGeeX; with only 6B parameters it surpasses the 15B-parameter StarCoder-15B by nearly 10%). Key features include:

  • Stronger code capabilities: Built on the ChatGLM2-6B base language model, CodeGeeX2-6B was further pre-trained on 600B tokens of code. Compared with the first generation, its coding ability improves across the board: all six languages of the HumanEval-X benchmark gain substantially (Python +57%, C++ +71%, Java +54%, JavaScript +83%, Go +56%, Rust +321%), and it reaches a 35.9% Pass@1 rate on Python, surpassing the larger StarCoder-15B.
  • Better model characteristics: Inheriting the ChatGLM2-6B design, CodeGeeX2-6B handles both Chinese and English input, supports a maximum sequence length of 8192, and infers much faster than the first-generation CodeGeeX-13B. After quantization it needs only 6 GB of GPU memory, enabling lightweight local deployment (see the quantized-loading sketch after this list).
  • A more complete AI coding assistant: The backend of the CodeGeeX plugin (VS Code, Jetbrains) has been upgraded to support more than 100 programming languages and adds practical features such as context-aware completion and cross-file completion. Together with the interactive Ask CodeGeeX assistant, it answers programming questions in Chinese or English, covering code explanation, code translation, bug fixing, documentation generation, and more, helping programmers develop more efficiently.
  • A more open license: The CodeGeeX2-6B weights are fully open for academic research; fill out the registration form to apply for commercial use.
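
For the lightweight local deployment mentioned above, a minimal quantized-loading sketch (it assumes the checkpoint's remote code exposes the same .quantize() helper as ChatGLM2-6B; verify before relying on it):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
# .quantize(4) follows the ChatGLM2-6B convention for INT4 weight quantization;
# that this checkpoint ships the same helper is an assumption.
model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).quantize(4).cuda()
model = model.eval()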

Tutorial

AI Coding Assistant

We have developed CodeGeeX plugins for IDEs including VS Code, IntelliJ IDEA, PyCharm, GoLand, WebStorm, and Android Studio. In the plugins you can directly experience how the CodeGeeX2 model improves development efficiency through code generation and completion, comment generation, code translation, and technical Q&A. Download the CodeGeeX plugin in your IDE for a more complete AI coding experience; see the CodeGeeX homepage for details.

Quick Start

Use transformers to quickly call CodeGeeX2-6B:

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True, device='cuda')
model = model.eval()

# remember to add a language tag for better performance
prompt = "# language: Python\n# write a bubble sort function\n"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_length=256, top_k=1)
response = tokenizer.decode(outputs[0])

>>> print(response)
# language: Python
# write a bubble sort function


def bubble_sort(list):
    for i in range(len(list) - 1):
        for j in range(len(list) - 1):
            if list[j] > list[j + 1]:
                list[j], list[j + 1] = list[j + 1], list[j]
    return list


print(bubble_sort([5, 2, 1, 8, 4]))

Launch the Gradio demo:

python ./demo/run_demo.py

usage: run_demo.py [-h] [--model-path MODEL_PATH] [--example-path EXAMPLE_PATH] [--quantize QUANTIZE]
                   [--chatglm-cpp] [--fastllm] [--n-gpus N_GPUS] [--gpu GPU] [--cpu] [--auth] [--username yourname]
                   [--password yourpassword]
                   [--port PORT] [--listen ADDRESS]

# To enable authentication, pass --auth and then set --username and --password, e.g.:
python run_demo.py --auth --username user --password password  # to listen on all addresses, specify --listen 0.0.0.0

Quantized inference acceleration with ChatGLM.cpp is supported:

python ./demo/run_demo.py --quantize 4 --chatglm-cpp

Launch the FastAPI server:

python ./demo/fastapicpu.py
usage: fastapicpu.py [-h] [--model-path MODEL_PATH] [--listen ADDRESS] [--port PORT] [--workders NUM] [--cpu] [--half] [--quantize QUANTIZE] [--chatglm-cpp]
# --cpu enables CPU inference, --half enables .half()

ChatGLM.cpp quantized inference acceleration is supported here as well; just add the --quantize 4 --chatglm-cpp arguments.

API usage example:

curl -X POST "http://127.0.0.1:7860" \
    -H 'Content-Type: application/json' \
    -d '{"lang": "Python", "prompt": "# Write a quick sort function"}'

❗️ Please note:

  • CodeGeeX2-6B is a base code generation model and has no chat capability. Please use the plugin for the more complete Ask CodeGeeX chat features.

  • When using CodeGeeX2-6B for completion, the input prompt must follow a specific format for best results. For example, add a language tag at the beginning (# language: Python; see the full list of supported languages) and write the prompt as comments. Refer to the handling in run_demo.py, and see the prompt-building sketch after this list.

  • If your GPU does not support the bfloat16 format, the model will produce garbled output; convert it to float16:

    model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).half().cuda()
  • To load the model across multiple GPUs, replace the following code:

    tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True, device='cuda')
    model = model.eval()

    with:

    def get_model():
        tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
        from gpus import load_model_on_gpus
        # the gpus module is in the demo folder
        model = load_model_on_gpus("THUDM/codegeex2-6b", num_gpus=2)
        model = model.eval()
        return tokenizer, model
    
    tokenizer, model = get_model()
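
As referenced in the notes above, a small helper for building well-formed completion prompts (a sketch; build_prompt is a hypothetical name, not part of this repo, and run_demo.py remains the reference implementation):

def build_prompt(instruction: str, language: str = "Python") -> str:
    # Prefix a language tag and phrase the instruction as comments,
    # mirroring the prompt format described in the notes above.
    comment = "\n".join(f"# {line}" for line in instruction.splitlines())
    return f"# language: {language}\n{comment}\n"

prompt = build_prompt("write a bubble sort function")
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)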

Code Capability Evaluation

As a multilingual code generation base model, CodeGeeX2 improves substantially over the previous generation. Below are evaluation results on the HumanEval, HumanEval-X, and DS1000 benchmarks (the Pass@k metric is defined as in the paper):
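
For reference, a sketch of the standard unbiased Pass@k estimator popularized by the Codex work, which evaluations of this kind typically use (that it matches the exact evaluation code here is an assumption):

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # n: samples generated per problem, c: samples passing all tests,
    # k: budget. Returns the unbiased estimate of pass@k.
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=7, k=1))  # 0.35, i.e. c/n when k=1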

HumanEval (Pass@1,10,100)

| Model | Pass@1 | Pass@10 | Pass@100 |
|---|---|---|---|
| CodeGen-16B-multi | 19.2 | 34.6 | 55.2 |
| CodeGeeX-13B | 22.9 | 39.6 | 60.9 |
| Codex-12B | 28.8 | 46.8 | 72.3 |
| CodeT5Plus-16B-mono | 30.9 | 51.6 | 76.7 |
| Code-Cushman-001 | 33.5 | 54.3 | 77.4 |
| LLaMA-65B | 23.7 | - | 79.3 |
| LLaMA2-70B | 29.9 | - | - |
| CodeGen2.5-7B-mono | 33.4 | 58.4 | 82.7 |
| StarCoder-15B | 33.2 | 61.0 | 84.7 |
| CodeGeeX2-6B | 35.9 | 62.6 | 88.3 |

Pass@1 uses n=20, t=0.2, top_p=0.95; Pass@10 and Pass@100 use n=200, t=0.8, top_p=0.95.

HumanEval-X (Pass@1)

| Model | Python | C++ | Java | JavaScript | Go | Rust | Overall |
|---|---|---|---|---|---|---|---|
| CodeGen-16B-multi | 19.2 | 18.1 | 15.0 | 18.4 | 13.0 | 1.8 | 14.2 |
| CodeGeeX-13B | 22.9 | 17.1 | 20.0 | 17.6 | 14.4 | 4.3 | 16.0 |
| Replit-code-v1-3B | 22.0 | 20.1 | 20.1 | 20.1 | 12.2 | 8.6 | 17.2 |
| CodeGen2.5-7B-multi | 30.6 | 24.3 | 29.0 | 27.5 | 18.9 | 20.1 | 25.1 |
| StarCoder-15B | 35.5 | 28.2 | 31.5 | 33.2 | 21.3 | 17.8 | 27.9 |
| CodeGeeX2-6B | 35.9 | 29.3 | 30.8 | 32.2 | 22.5 | 18.1 | 28.1 |

Pass@1 uses n=20, t=0.2, top_p=0.95.

The results above can be reproduced with the script scripts/run_humanevalx.sh. See the evaluation environment docs for setup and instructions.

DS1000 (Pass@1)

| Model | Matplotlib | Numpy | Pandas | Pytorch | SciPy | Scikit-learn | TensorFlow | Overall |
|---|---|---|---|---|---|---|---|---|
| # Samples | 155 | 220 | 291 | 68 | 106 | 115 | 45 | 1000 |
| CodeGen-16B-Mono | 31.7 | 10.9 | 3.4 | 7.0 | 9.0 | 10.8 | 15.2 | 11.7 |
| code-cushman-001 | 40.7 | 21.8 | 7.9 | 12.4 | 11.3 | 18.0 | 12.2 | 18.1 |
| Codex-001 | 41.8 | 26.6 | 9.4 | 9.7 | 15.0 | 18.5 | 17.2 | 20.2 |
| CodeGeeX2-6B | 40.5 | 25.5 | 14.5 | 17.3 | 19.3 | 24.0 | 23.0 | 23.1 |
| StarCoder-15B | 51.7 | 29.7 | 11.4 | 21.4 | 20.2 | 29.5 | 24.5 | 26.0 |
| Codex-002 | 57.0 | 43.1 | 26.5 | 41.8 | 31.8 | 44.8 | 39.3 | 39.2 |

Pass@1 uses n=40, t=0.2, top_p=0.5.

The results above can be reproduced with the DS1000 evaluation code.

Quantized Inference Performance

Compared with the previous generation, CodeGeeX2 is more deployment-friendly. Thanks to Multi-Query Attention and Flash Attention, inference is faster, and after quantization the model needs only 6 GB of GPU memory to run:

Quantization

| Model | FP16/BF16 | INT8 | INT4 |
|---|---|---|---|
| CodeGeeX-13B | 26.9 GB | 14.7 GB | - |
| CodeGeeX2-6B | 13.1 GB | 8.2 GB | 5.5 GB |

Tested on PyTorch 2.0, using torch.nn.functional.scaled_dot_product_attention for efficient attention computation.
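
A toy call to the fused kernel named above (shapes are arbitrary; this is an illustration, not CodeGeeX2's actual module code):

import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)
# Fused, memory-efficient attention with a causal mask.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])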

Inference

| Model | Inference speed (chars/s) |
|---|---|
| CodeGeeX-13B | 32 |
| CodeGeeX2-6B | 94 |

batch_size=1, max_length=2048; both models use the acceleration framework; tested on a GeForce RTX 3090.
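
A rough way to reproduce a chars/sec figure of this kind (a sketch under assumptions: it reuses the tokenizer and model from the Quick Start section, times a single unbatched generation, and is not necessarily the authors' benchmark harness):

import time

prompt = "# language: Python\n# write a bubble sort function\n"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
start = time.time()
outputs = model.generate(inputs, max_length=2048)
elapsed = time.time() - start
text = tokenizer.decode(outputs[0])
print(f"{len(text) / elapsed:.1f} chars/sec")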

License

The code in this repository is open-sourced under the Apache-2.0 license. Use of the model weights must comply with the Model License. The CodeGeeX2-6B weights are fully open for academic research; fill out the registration form to apply for commercial use.

Citation

If you find our work helpful, please consider citing the following paper:

@inproceedings{zheng2023codegeex,
      title={CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X},
      author={Qinkai Zheng and Xiao Xia and Xu Zou and Yuxiao Dong and Shan Wang and Yufei Xue and Zihan Wang and Lei Shen and Andi Wang and Yang Li and Teng Su and Zhilin Yang and Jie Tang},
      booktitle={KDD},
      year={2023}
}

More Repositories

1. ChatGLM-6B (Python, 39,038 stars): ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
2. ChatGLM2-6B (Python, 15,431 stars): ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
3. ChatGLM3 (Python, 11,719 stars): ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型
4. CodeGeeX (Python, 7,733 stars): CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
5. GLM-130B (Python, 7,599 stars): GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
6. CogVLM (Python, 4,870 stars): a state-of-the-art-level open visual language model | 多模态预训练模型
7. VisualGLM-6B (Python, 3,973 stars): Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
8. CogVideo (Python, 3,487 stars): Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
9. GLM (Python, 3,006 stars): GLM (General Language Model)
10. P-tuning-v2 (Python, 1,875 stars): An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
11. AgentBench (Python, 1,836 stars): A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
12. CogDL (Python, 1,693 stars): CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023)
13. CogView (Python, 1,589 stars): Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers"
14. WebGLM (Python, 1,499 stars): WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023)
15. AgentTuning (Python, 1,209 stars): AgentTuning: Enabling Generalized Agent Abilities for LLMs
16. CogView2 (Python, 928 stars): official code repo for paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"
17. ImageReward (Python, 921 stars): [NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
18. P-tuning (Python, 883 stars): A novel method to tune language models. Codes and datasets for paper "GPT understands, too".
19. SwissArmyTransformer (Python, 842 stars): SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
20. GATNE (Python, 511 stars): Source code and dataset for KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network"
21. LongBench (Python, 463 stars): LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
22. CogQA (Python, 454 stars): Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
23. GraphMAE (Python, 413 stars): GraphMAE: Self-Supervised Masked Graph Autoencoders in KDD'22
24. GCC (Python, 315 stars): GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training @ KDD 2020
25. MathGLM (Python, 305 stars): Official Pytorch Implementation for MathGLM
26. HGB (Python, 285 stars): Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks.
27. ComiRec (Python, 268 stars): Source code and dataset for KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation"
28. KOBE (Python, 236 stars): Towards Knowledge-Based Personalized Product Description Generation in E-commerce @ KDD 2019
29. NLP4Rec-Papers (227 stars): Paper list of NLP for recommender systems
30. ProNE (Python, 221 stars): Source code and dataset for IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning"
31. RelayDiffusion (Python, 220 stars): The official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" [ICLR 2024 Spotlight]
32. Chinese-Transformer-XL (Python, 212 stars)
33. GRAND (Python, 202 stars): Source code and dataset of the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs"
34. AlignBench (Python, 184 stars): Benchmarking Chinese Alignment of LLMs | 多维度中文对齐评测基准
35. icetk (Python, 145 stars): A unified tokenization tool for Images, Chinese and English.
36. AutoWebGLM (Python, 140 stars)
37. KBRD (Python, 133 stars): Towards Knowledge-Based Recommender Dialog System @ EMNLP 2019
38. iPrompt (Python, 120 stars): Code, Data and Demo for Paper: Controllable Generation from Pre-trained Language Models via Inverse Prompting
39. CogCoM (Python, 119 stars)
40. MCNS (Python, 111 stars): Source code and dataset for KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning"
41. GraphMAE2 (Python, 105 stars): GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner in WWW'23
42. LongAlign (Python, 105 stars): LongAlign: A Recipe for Long Context Alignment Encompassing Data, Training, and Evaluation
43. ProteinLM (Python, 102 stars): Protein Language Model
44. grb (Python, 89 stars): Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
45. GraphSGAN (Python, 84 stars): Implementation of "GraphSGAN", a GAN-based semi-supervised learning algorithm for graph data.
46. kgTransformer (Python, 83 stars): kgTransformer: pre-training for reasoning over complex KG queries (KDD 22)
47. ScenarioMeta (Python, 81 stars): Source code and dataset for KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation"
48. OAG-BERT (76 stars): A heterogeneous entity-augmented academic language model based on Open Academic Graph (OAG)
49. CogKR (Python, 70 stars): Source code and dataset for paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning"
50. FewNLU (Python, 67 stars)
51. SelfKG (Python, 65 stars): Codes for WWW2022 accepted paper: SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs
52. XDAI (Python, 62 stars)
53. Multilingual-GLM (Python, 62 stars): The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective
54. OAG (Python, 61 stars): Source code and dataset for KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs"
55. SciGLM (Python, 53 stars): SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning
56. Graph-Reading-Group (45 stars): Daily reading group on graphs at KEG
57. CogAgent (38 stars)
58. SCR (Python, 36 stars): SCR: Training Graph Neural Networks with Consistency Regularization
59. FastLDM (Python, 34 stars): Inference speed-up for stable-diffusion (ldm) with TensorRT.
60. KDD-Industrial-Papers (30 stars): A list of recent industrial papers in KDD'16–'18
61. WhoIsWho (Python, 29 stars): KDD'23 Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit
62. GraphCAD (Python, 29 stars): TKDE'22-GraphCAD: https://arxiv.org/pdf/2108.07516.pdf
63. GRAND-plus (Python, 29 stars): Code and dataset for paper "GRAND+: Scalable Graph Random Neural Networks"
64. ChatGLM-Math (Python, 28 stars)
65. GIAAD (Python, 22 stars): Graph Injection Adversarial Attack & Defense Dataset, extracted from KDD CUP 2020 ML2 Track
66. Tsinghua-ML-Course (HTML, 22 stars): Course Materials for ML Course at Tsinghua
67. GLM-iprompt (Python, 21 stars): Apply Iprompt on GLM with innovative new methods. Currently supports Chinese QA, English QA and Chinese poem generation.
68. ApeGNN (Python, 19 stars): ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23)
69. HOSMEL (Python, 17 stars): A task-relevant entity linking toolkit
70. tdgia (Python, 17 stars): code for paper "TDGIA: Effective Injection Attacks on Graph Neural Networks" (KDD 2021, research track)
71. MRT (17 stars): MRT: Tracing the Evolution of Scientific Publications (TKDE 2021)
72. eTrust (C++, 16 stars): Source code and dataset for TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform"
73. BatchSampler (Python, 16 stars): The source code for BatchSampler, accepted at KDD'23
74. LargeScale (Python, 15 stars)
75. Efficient-Head-Finetuning (Python, 13 stars): Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head
76. IGB (Python, 12 stars): Source code and dataset for IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs"
77. Self-Contrast (Python, 11 stars): Extensive Self-Contrast Enables Feedback-Free Language Model Alignment
78. paper-source-trace (Python, 8 stars)
79. citation-prediction (Python, 8 stars)
80. RecDCL (Python, 8 stars): RecDCL: Dual Contrastive Learning for Recommendation (WWW'24, Oral)
81. Refined-cora-citeseer (6 stars)
82. scholar-profiling (Jupyter Notebook, 6 stars)
83. DropConn (Python, 5 stars): DropConn: Dropout Connection Based Random GNNs for Molecular Property Prediction (TKDE'24)
84. OAG-entity-alignment (Python, 5 stars)
85. STAM (Python, 4 stars): Source code and dataset for WWW'22 paper "STAM: A Spatiotemporal Aggregation Method for Graph Neural Network-based Recommendation"
86. whoiswho-top-solutions (Python, 4 stars)
87. Paper-Rec (Python, 3 stars)
88. DistAlign-GNNs (Jupyter Notebook, 3 stars)
89. open_clip_pix2struct (Jupyter Notebook, 3 stars): pix2struct version of open_clip
90. RecNS (Python, 3 stars): Source code and dataset for TKDE'22 paper "Region or Global? A Principle for Negative Sampling in Graph-based Recommendation"
91. tot-prediction (Python, 2 stars)
92. Reviewer-Rec (Python, 2 stars)
93. oag-author-tagging (Python, 2 stars)
94. OAG-taxo (Python, 1 star)