  • Stars: 8,150
  • Rank: 4,554 (Top 0.09%)
  • Language: Python
  • License: Apache License 2.0
  • Created: about 2 years ago
  • Updated: 3 months ago


Repository Details

CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)

🏠 Homepage | 📖 Blog | 🪧 DEMO | 🤖 Download Model | 📄 Paper | 🌐 中文 (Chinese)

🛠 VS Code, JetBrains, and Cloud Studio supported | 👋 Join our Discord, Slack, Telegram, WeChat

Cloud Studio Template

CodeGeeX: A Multilingual Code Generation Model

We introduce CodeGeeX, a large-scale multilingual code generation model with 13 billion parameters, pre-trained on a large code corpus of more than 20 programming languages. As of June 22, 2022, CodeGeeX has been trained on more than 850 billion tokens on a cluster of 1,536 Ascend 910 AI Processors. CodeGeeX has several unique features:

  • Multilingual Code Generation: CodeGeeX performs well at generating executable programs in several mainstream programming languages, including Python, C++, Java, JavaScript, Go, etc. DEMO
  • Crosslingual Code Translation: CodeGeeX supports the translation of code snippets between different languages. With a single click, CodeGeeX can translate a program into the target language with high accuracy. DEMO
  • Customizable Programming Assistant: CodeGeeX is available in the VS Code extension marketplace for free. It supports code completion, explanation, summarization, and more, empowering users with a better coding experience. VS Code Extension
  • Open-Source and Cross-Platform: All code and model weights are publicly available for research purposes. CodeGeeX supports both Ascend and NVIDIA platforms and runs inference on a single Ascend 910, NVIDIA V100, or A100. Apply Model Weights

HumanEval-X for Realistic Multilingual Benchmarking. To help standardize the evaluation of multilingual code generation and translation, we develop and release the HumanEval-X benchmark. HumanEval-X is a new multilingual benchmark that contains 820 human-crafted coding problems in 5 programming languages (Python, C++, Java, JavaScript, and Go); each problem comes with tests and solutions. Usage 🤗 Available on HuggingFace
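
For reference, HumanEval-X can be loaded with the HuggingFace datasets library. The sketch below assumes the dataset id THUDM/humaneval-x and a per-language config name such as "python"; check the actual HuggingFace listing before relying on these names.

from datasets import load_dataset

# Assumed dataset id and config name; adjust to the actual HuggingFace listing.
humaneval_x_python = load_dataset("THUDM/humaneval-x", "python", split="test")

print(len(humaneval_x_python))       # 164 problems per language (820 problems / 5 languages)
print(humaneval_x_python[0].keys())  # inspect the per-sample fields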

CodeGeeX achieves the highest average performance compared with other open-source multilingual baselines.

News

  • 2023-03-30: CodeGeeX paper is now available at arxiv.

  • 2023-02-14: CodeGeeX now supports Cloud Studio, a fantastic web IDE from Tencent. Click on the badge on top of this page to quickly launch an environment to test CodeGeeX.

  • 2023-02-13: Thanks to the OneFlow team for adding a OneFlow backend for CodeGeeX's inference (even faster than FasterTransformer under FP16!). Check more details here.

  • 🌟 2023-02: We are hosting the CodeGeeX "Coding With AI" Hackathon: design cool applications based on CodeGeeX and win prizes (RTX 4090, DJI drone, etc.)!

  • 2022-12-31: We release the FasterTransformer version of CodeGeeX in codegeex-fastertransformer. The INT8-accelerated version reaches an average speed of <15 ms/token. Happy new year to everyone!

  • 2022-12-13: We release the source code of CodeGeeX VS Code extension in codegeex-vscode-extension. Follow QuickStart to start development.

  • 2022-12-11: CodeGeeX is now available for JetBrains IDEs (IntelliJ IDEA, PyCharm, GoLand, CLion, etc.); download it here.

  • 2022-12-04: We release the source code for quantization (reducing GPU RAM from 27GB to 15GB) and model parallelism (making it possible to run on multiple GPUs with <8GB RAM each).

  • 2022-09-30: We release the cross-platform source code and model weights for both Ascend and NVIDIA platforms.

Getting Started

CodeGeeX was initially implemented in MindSpore and trained on Ascend 910 AI Processors. We provide a torch-compatible version based on Megatron-LM to facilitate usage on GPU platforms.

Installation

Python 3.7+ / CUDA 11+ / PyTorch 1.10+ / DeepSpeed 0.6+ are required. Install the codegeex package via:

git clone git@github.com:THUDM/CodeGeeX.git
cd CodeGeeX
pip install -e .

Or use CodeGeeX docker to quickly set up the environment (with nvidia-docker installed):

docker pull codegeex/codegeex:latest
# To enable GPU support, specify device ids via --gpus (e.g., device=0,1)
docker run --gpus '"device=0,1"' -it --ipc=host --name=codegeex codegeex/codegeex
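
Either way, a quick sanity check is to import the package and print where it was loaded from (a minimal sketch; it only assumes the codegeex package from this repository is importable):

# check_install.py -- verify the codegeex package is importable
import codegeex

# For an editable install (`pip install -e .`), this prints a path inside the cloned repo.
print(codegeex.__file__)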

Model Weights

Apply for and download the model weights through this link. You'll receive urls.txt by mail, which contains temporary download links. We recommend using aria2 to download them via the following command (please make sure you have enough disk space for the checkpoint, ~26GB):

aria2c -x 16 -s 16 -j 4 --continue=true -i urls.txt 

Run the following command to get the full model weights:

cat codegeex_13b.tar.gz.* > codegeex_13b.tar.gz
tar xvf codegeex_13b.tar.gz

Inference on GPUs

Try generating your first program with CodeGeeX. First, specify the path to the model weights in configs/codegeex_13b.sh. Second, write a prompt (natural language description or code snippet) into a file, e.g., tests/test_prompt.txt, then run the following script:

# On a single GPU (with more than 27GB RAM)
bash ./scripts/test_inference.sh <GPU_ID> ./tests/test_prompt.txt

# With quantization (with more than 15GB RAM)
bash ./scripts/test_inference_quantized.sh <GPU_ID> ./tests/test_prompt.txt

# On multiple GPUs (each with more than 6GB RAM; first convert the checkpoint into MP_SIZE partitions)
bash ./scripts/convert_ckpt_parallel.sh <LOAD_CKPT_PATH> <SAVE_CKPT_PATH> <MP_SIZE>
bash ./scripts/test_inference_parallel.sh <MP_SIZE> ./tests/test_prompt.txt
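
The prompt file is plain text. As an illustration (the function below is a hypothetical example, not a file shipped with the repository), a Python prompt using the language tag described later in this README could look like:

# language: Python
# Hypothetical prompt: CodeGeeX is expected to complete the function body.
def remove_duplicates(items):
    """Return a new list with duplicate elements removed, preserving order."""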

VS Code and JetBrains Extension Guidance

Based on CodeGeeX, we also develop free extensions for VS Code and JetBrains IDEs, with more to come in the future.

For VS Code, search for "codegeex" in the Marketplace or install it here. Detailed instructions can be found in the VS Code Extension Guidance. For developers, we have also released the source code in codegeex-vscode-extension; please follow the QuickStart to start development.

For JetBrains IDEs, search for "codegeex" in Plugins or install it here. Make sure your IDE version is 2021.1 or later. CodeGeeX now supports IntelliJ IDEA, PyCharm, GoLand, CLion, Android Studio, AppCode, Aqua, DataSpell, DataGrip, Rider, RubyMine, and WebStorm.

CodeGeeX: Architecture, Code Corpus, and Implementation

Architecture: CodeGeeX is a large-scale pre-trained programming language model based on the transformer architecture. It is a left-to-right autoregressive decoder that takes code and natural language as input and predicts the probability of the next token. CodeGeeX contains 40 transformer layers with a hidden size of 5,120 for self-attention blocks and 20,480 for feed-forward layers, bringing its size to 13 billion parameters. It supports a maximum sequence length of 2,048.
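
As a back-of-envelope check of the reported size (a rough sketch that ignores biases, layer norms, and positional embeddings), the hyperparameters above add up to roughly 13 billion parameters:

# Rough parameter count from the hyperparameters listed above.
layers, hidden, ffn, vocab = 40, 5120, 20480, 50400  # vocabulary size from the corpus section below

attention = 4 * hidden * hidden      # Q, K, V and output projection matrices
feed_forward = 2 * hidden * ffn      # up- and down-projection matrices
per_layer = attention + feed_forward

embeddings = vocab * hidden          # token embedding table
total = layers * per_layer + embeddings

print(f"~{total / 1e9:.1f}B parameters")  # ~12.8B, i.e. roughly 13B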

Left: the proportion of programming languages in CodeGeeX's training data. Right: the plot of training loss against the training steps of CodeGeeX.

Code Corpus: Our training data contains two parts. The first part comes from open-source code datasets, The Pile and CodeParrot. The Pile contains a subset of code corpus that collects public repositories with more than 100 stars from GitHub, from which we select code in 23 popular programming languages. The second part is supplementary data scraped directly from public GitHub repositories that do not appear in the previous datasets, covering Python, Java, and C++. To obtain data of potentially higher quality, we choose repositories with at least one star and a size smaller than 10MB. A file is filtered out if it 1) has more than 100 characters per line on average, 2) is automatically generated, 3) has an alphabetic-character ratio below 40%, or 4) is bigger than 100KB or smaller than 1KB. To help the model distinguish between languages, we add a language-specific prefix at the beginning of each segment in the form of [Comment sign] language: [LANG], e.g., # language: Python. For tokenization, we use the same tokenizer as GPT-2 and process whitespace as extra tokens, resulting in a vocabulary of 50,400 tokens. In total, the code corpus covers 23 programming languages with 158.7B tokens.
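
A minimal sketch of the file-level filters and the language prefix described above (the function names and thresholds simply restate the rules in this paragraph; the actual cleaning pipeline is not part of this repository):

def keep_file(text: str, size_bytes: int) -> bool:
    """Apply the file-level filtering rules described above."""
    lines = text.splitlines() or [""]
    avg_line_len = sum(len(line) for line in lines) / len(lines)
    alpha_ratio = sum(ch.isalpha() for ch in text) / max(len(text), 1)
    return (
        avg_line_len <= 100                   # rule 1: average line length
        and alpha_ratio >= 0.4                # rule 3: alphabetic-character ratio
        and 1_000 <= size_bytes <= 100_000    # rule 4: file size between 1KB and 100KB
        # rule 2 (automatically generated files) is omitted in this sketch
    )

def add_language_prefix(code: str, lang: str, comment_sign: str = "#") -> str:
    """Prepend the language-specific tag, e.g. `# language: Python`."""
    return f"{comment_sign} language: {lang}\n{code}"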

Training: We implement CodeGeeX in MindSpore 1.7 and train it on 1,536 Ascend 910 AI Processors (32GB). The model weights are in FP16 format, except that we use FP32 for layer norm and softmax for higher precision and stability. The entire model consumes about 27GB of memory. To increase training efficiency, we adopt 8-way model parallelism together with 192-way data parallelism, with the ZeRO-2 optimizer enabled. The micro-batch size is 16 and the global batch size reaches 3,072. Moreover, we adopt techniques to further boost training efficiency, including element-wise operator fusion, fast GELU activation, matrix-multiplication dimension optimization, etc. The entire training process takes nearly two months, spanning from April 18 to June 22, 2022, during which 850B tokens were passed for training, i.e., 5+ epochs.
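
The parallelism figures are consistent with the stated device count and global batch size; a quick check (plain arithmetic, nothing model-specific):

# 8-way model parallelism x 192-way data parallelism on 1,536 Ascend 910 processors
model_parallel, data_parallel, micro_batch = 8, 192, 16

assert model_parallel * data_parallel == 1536   # number of devices
assert data_parallel * micro_batch == 3072      # global batch size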

HumanEval-X: A new benchmark for Multilingual Program Synthesis

To better evaluate the multilingual ability of code generation models, we propose a new benchmark, HumanEval-X. While previous works evaluate multilingual program synthesis with semantic-similarity metrics (e.g., CodeBLEU), which are often misleading, HumanEval-X evaluates the functional correctness of the generated programs. HumanEval-X consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks.

An illustration of tasks supported by HumanEval-X. Declarations, docstrings, and solutions are marked in red, green, and blue, respectively. Code generation uses the declaration and docstring as input to generate the solution. Code translation uses the declarations in both languages and translates the solution in the source language into the target language.

In HumanEval-X, every sample in each language contains a declaration, docstring, and solution, which can be combined in various ways to support different downstream tasks including generation, translation, summarization, etc. We currently focus on two tasks: code generation and code translation. For code generation, the model uses the declaration and docstring as input to generate the solution. For code translation, the model uses the declarations in both languages and the solution in the source language as input to generate a solution in the target language. We remove the description during code translation to prevent the model from directly solving the problem. For both tasks, we use the unbiased pass@k metric proposed in Codex: $\text{pass}@k := \mathbb{E}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$, with $n=200$ and $k\in\{1,10,100\}$.
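
In code, this estimator is usually computed in the numerically stable product form from the Codex paper. A small sketch (n is the number of generated samples per problem and c the number that pass the tests):

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), evaluated as a stable product."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical problem where 37 of 200 samples pass its tests.
for k in (1, 10, 100):
    print(f"pass@{k} = {pass_at_k(200, 37, k):.4f}")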

Multilingual Code Generation

Left: the detailed pass@k (k=1,10,100) performance on the code generation task for five languages in HumanEval-X. Right: the average performance of all languages for each model. CodeGeeX achieves the highest average performance compared with InCoder-6.7B, CodeGen-Multi-6B, and CodeGen-Multi-16B.

We compare CodeGeeX with two other open-source code generation model families, InCoder (from Meta) and CodeGen (from Salesforce). Specifically, InCoder-6.7B, CodeGen-Multi-6B, and CodeGen-Multi-16B are considered. CodeGeeX significantly outperforms the smaller models (by 7.5%~16.3%) and is competitive with the larger CodeGen-Multi-16B (average performance 54.76% vs. 54.39%). CodeGeeX achieves the best average performance across languages.

Crosslingual Code Translation

Results on the HumanEval-X code translation task. The best performance for each language is bolded.

We also evaluate translation performance across different programming languages. We test the zero-shot performance of CodeGeeX, as well as the fine-tuned CodeGeeX-13B-FT (fine-tuned on the training set of the code translation tasks in XLCoST; since Go is absent from the original set, we add a small Go set to it). The results indicate that models have language preferences, e.g., CodeGeeX is good at translating other languages to Python and C++, while CodeGen-Multi-16B is better at translating to JavaScript and Go; this is probably due to differences in the language distribution of the training corpora. Among the 20 translation pairs, we also observe that the performance of A-to-B and B-to-A is always negatively correlated, which might indicate that current models are still not capable of learning all languages equally well.

How to use HumanEval-X and contribute to it?

For more details on how to use HumanEval-X, please see the usage guide. We warmly welcome the community to contribute to HumanEval-X by adding more problems or extending it to other languages; please check out the standard format of HumanEval-X and open a pull request.
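
For illustration only, a contributed problem is one record per language, roughly of the following shape; the field names below are assumptions in the spirit of HumanEval, so consult the linked standard format for the authoritative schema:

# Hypothetical example record; the authoritative field names are in the standard format linked above.
problem = {
    "task_id": "Python/164",  # language plus a running index
    "declaration": "def is_palindrome(s: str) -> bool:\n",
    "docstring": "Check whether the string reads the same forwards and backwards.",
    "canonical_solution": "    return s == s[::-1]\n",
    "test": "assert is_palindrome('level') and not is_palindrome('code')\n",
}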

Please let us know if you have any comments or suggestions via [email protected].

Examples of Generation

Acknowledgement

This project is supported by the National Science Foundation for Distinguished Young Scholars (No. 61825602).

Lead Contributors

Qinkai Zheng (Tsinghua KEG), Xiao Xia (Tsinghua KEG), Xu Zou (Tsinghua KEG)

Contributors

Tsinghua KEG---The Knowledge Engineering Group at Tsinghua: Aohan Zeng, Wendi Zheng, Lilong Xue

Zhilin Yang's Group at Tsinghua IIIS: Yifeng Liu, Yanru Chen, Yichen Xu (BUPT, work was done when visiting Tsinghua)

Peng Cheng Laboratory: Qingyu Chen, Zhongqi Li, Gaojun Fan

Zhipu.AI: Yufei Xue, Shan Wang, Jiecai Shan, Haohan Jiang, Lu Liu, Xuan Xue, Peng Zhang

Ascend and Mindspore Team: Yifan Yao, Teng Su, Qihui Deng, Bin Zhou

Data Annotations

Ruijie Cheng (Tsinghua), Peinan Yu (Tsinghua), Jingyao Zhang (Zhipu.AI), Bowen Huang (Zhipu.AI), Shaoyu Wang (Zhipu.AI)

Advisors

Zhilin Yang (Tsinghua IIIS), Yuxiao Dong (Tsinghua KEG), Wenguang Chen (Tsinghua PACMAN), Jie Tang (Tsinghua KEG)

Computation Sponsors

Peng Cheng Laboratory

Zhipu.AI---an AI startup that aims to teach machines to think like humans

Project Leader

Jie Tang (Tsinghua KEG & BAAI)

License

Our code is licensed under the Apache-2.0 license. Our model weights are licensed under the model license.

Citation

If you find our work useful, please cite:

@misc{zheng2023codegeex,
      title={CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X}, 
      author={Qinkai Zheng and Xiao Xia and Xu Zou and Yuxiao Dong and Shan Wang and Yufei Xue and Zihan Wang and Lei Shen and Andi Wang and Yang Li and Teng Su and Zhilin Yang and Jie Tang},
      year={2023},
      eprint={2303.17568},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

More Repositories

1. ChatGLM-6B (Python, 40,459 stars): ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
2. ChatGLM2-6B (Python, 15,702 stars): ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
3. ChatGLM3 (Python, 13,366 stars): ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型
4. CogVideo (Python, 7,976 stars): Text and image to video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
5. GLM-130B (Python, 7,653 stars): GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
6. CodeGeeX2 (Python, 7,622 stars): CodeGeeX2: A More Powerful Multilingual Code Generation Model
7. CogVLM (Python, 5,913 stars): A state-of-the-art-level open visual language model | 多模态预训练模型
8. GLM-4 (Python, 4,826 stars): GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型
9. VisualGLM-6B (Python, 4,076 stars): Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
10. GLM (Python, 3,168 stars): GLM (General Language Model)
11. AgentBench (Python, 2,144 stars): A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
12. CogVLM2 (Python, 2,018 stars): GPT4V-level open-source multi-modal model based on Llama3-8B
13. P-tuning-v2 (Python, 1,968 stars): An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
14. CogDL (Python, 1,720 stars): CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023)
15. CogView (Python, 1,691 stars): Text-to-image generation. The repo for the NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
16. WebGLM (Python, 1,557 stars): WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023)
17. AgentTuning (Python, 1,339 stars): AgentTuning: Enabling Generalized Agent Abilities for LLMs
18. CodeGeeX4 (Python, 1,271 stars): CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more.
19. ImageReward (Python, 1,117 stars): [NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
20. LongWriter (Python, 1,076 stars): LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
21. SwissArmyTransformer (Python, 966 stars): SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
22. CogView2 (Python, 944 stars): Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"
23. P-tuning (Python, 915 stars): A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
24. LongBench (Python, 629 stars): [ACL 2024] LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
25. AutoWebGLM (Python, 584 stars): An LLM-based Web Navigating Agent (KDD'24)
26. GATNE (Python, 522 stars): Source code and dataset for the KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network"
27. GraphMAE (Python, 462 stars): GraphMAE: Self-Supervised Masked Graph Autoencoders (KDD'22)
28. CogQA (Python, 456 stars): Source code and dataset for the ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
29. Inf-DiT (Python, 366 stars): Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer
30. GCC (Python, 322 stars): GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training (KDD 2020)
31. MathGLM (Python, 316 stars): Official PyTorch implementation of MathGLM
32. HGB (Python, 301 stars): Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks
33. AlignBench (Python, 295 stars): A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024)
34. ComiRec (Python, 278 stars): Source code and dataset for the KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation"
35. LongCite (Python, 272 stars): LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
36. RelayDiffusion (Python, 262 stars): The official implementation of "Relay Diffusion: Unifying Diffusion Process Across Resolutions for Image Synthesis" [ICLR 2024 Spotlight]
37. KOBE (Python, 237 stars): Towards Knowledge-Based Personalized Product Description Generation in E-commerce (KDD 2019)
38. NLP4Rec-Papers (225 stars): Paper list of NLP for recommender systems
39. ProNE (Python, 225 stars): Source code and dataset for the IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning"
40. Chinese-Transformer-XL (Python, 218 stars)
41. GRAND (Python, 203 stars): Source code and dataset for the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs"
42. LongAlign (Python, 199 stars): [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs
43. icetk (Python, 150 stars): A unified tokenization tool for images, Chinese, and English
44. CogCoM (Jupyter Notebook, 146 stars)
45. ReST-MCTS (Python, 146 stars): ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024)
46. KBRD (Python, 134 stars): Towards Knowledge-Based Recommender Dialog System (EMNLP 2019)
47. GraphMAE2 (Python, 133 stars): GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner (WWW'23)
48. iPrompt (Python, 121 stars): Code, data, and demo for the paper "Controllable Generation from Pre-trained Language Models via Inverse Prompting"
49. ProteinLM (Python, 111 stars): Protein Language Model
50. MCNS (Python, 111 stars): Source code and dataset for the KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning"
51. VisualAgentBench (Python, 94 stars): Towards Large Multimodal Models as Visual Foundation Agents
52. CogView3 (Python, 93 stars): Text-to-image generation: CogView3-Plus and CogView3 (ECCV 2024)
53. grb (Python, 91 stars): Graph Robustness Benchmark: a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of graph machine learning
54. GraphSGAN (Python, 85 stars): Implementation of "GraphSGAN", a GAN-based semi-supervised learning algorithm for graph data
55. kgTransformer (Python, 83 stars): kgTransformer: pre-training for reasoning over complex KG queries (KDD 2022)
56. ScenarioMeta (Python, 80 stars): Source code and dataset for the KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation"
57. OAG-BERT (76 stars): A heterogeneous entity-augmented academic language model based on the Open Academic Graph (OAG)
58. ChatGLM-Math (Python, 75 stars)
59. CogKR (Python, 71 stars): Source code and dataset for the paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning"
60. SelfKG (Python, 67 stars): Code for the WWW 2022 paper "SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs"
61. FewNLU (Python, 65 stars)
62. SciGLM (Python, 62 stars): SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024)
63. Multilingual-GLM (Python, 62 stars): The multilingual variant of GLM, a general language model trained with an autoregressive blank infilling objective
64. XDAI (Python, 61 stars)
65. CogAgent (59 stars)
66. OAG (Python, 59 stars): Source code and dataset for the KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs"
67. NaturalCodeBench (Python, 54 stars)
68. LVBench (Python, 52 stars): LVBench: An Extreme Long Video Understanding Benchmark
69. AutoRE (Python, 45 stars)
70. Graph-Reading-Group (44 stars): Daily reading group on graphs at KEG
71. SCR (Python, 37 stars): SCR: Training Graph Neural Networks with Consistency Regularization
72. WhoIsWho (Python, 34 stars): KDD'23 Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit
73. FastLDM (Python, 34 stars): Inference speed-up for stable-diffusion (LDM) with TensorRT
74. GraphCAD (Python, 30 stars): TKDE'22 GraphCAD: https://arxiv.org/pdf/2108.07516.pdf
75. GRAND-plus (Python, 30 stars): Code and dataset for the paper "GRAND+: Scalable Graph Random Neural Networks"
76. KDD-Industrial-Papers (28 stars): A list of recent industrial papers in KDD'16–'18
77. ApeGNN (Python, 23 stars): ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23)
78. GLM-iprompt (Python, 21 stars): Apply iPrompt on GLM with innovative new methods. Currently supports Chinese QA, English QA, and Chinese poem generation.
79. GIAAD (Python, 21 stars): Graph Injection Adversarial Attack & Defense Dataset, extracted from KDD Cup 2020 ML2 Track
80. Tsinghua-ML-Course (HTML, 21 stars): Course materials for the ML course at Tsinghua
81. HOSMEL (Python, 20 stars): A task-relevant entity linking toolkit
82. Self-Contrast (Python, 19 stars): Extensive Self-Contrast Enables Feedback-Free Language Model Alignment
83. RecDCL (Python, 19 stars): RecDCL: Dual Contrastive Learning for Recommendation (WWW'24, Oral)
84. tdgia (Python, 18 stars): Code for the paper "TDGIA: Effective Injection Attacks on Graph Neural Networks" (KDD 2021, research track)
85. BatchSampler (Python, 18 stars): Source code for BatchSampler, accepted at KDD'23
86. MRT (16 stars): MRT: Tracing the Evolution of Scientific Publications (TKDE 2021)
87. LargeScale (Python, 15 stars)
88. eTrust (C++, 15 stars): Source code and dataset for the TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform"
89. MSAGPT (Python, 15 stars)
90. whoiswho-top-solutions (Python, 14 stars)
91. paper-source-trace (Python, 14 stars)
92. Efficient-Head-Finetuning (Python, 13 stars): Source code for the EMNLP 2022 long paper "Parameter-Efficient Tuning Makes a Good Classification Head"
93. IGB (Python, 10 stars): Source code and dataset for the IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs"
94. BattleAgentBench (Python, 9 stars)
95. GraphAlign (Python, 8 stars): GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment
96. APAR (Python, 8 stars): APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
97. scholar-profiling (Jupyter Notebook, 7 stars)
98. citation-prediction (Python, 7 stars)
99. OpenWebAgent (JavaScript, 6 stars): A convenient framework for developing LLM- and LMM-based web agents
100. OAG-AQA (Python, 6 stars)