  • Stars: 1,589
  • Rank: 28,430 (Top 0.6%)
  • Language: Python
  • License: Apache License 2.0
  • Created: almost 3 years ago
  • Updated: 8 months ago


Repository Details

Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".

Generate vivid images for any (Chinese) text

[teaser image]

News! The paper of ImageReward has been accepted by NeurIPS 2023!

News! The code of ImageReward (paper link) has been released at https://github.com/THUDM/ImageReward! ImageReward is the first general-purpose text-to-image human preference reward model (RM).

News! The code of CogView2 (paper link) has been released at https://github.com/THUDM/CogView2!

News! The demo for a better and faster CogView2 (formal version, March 2022) is available! The latest model also supports English input, but translating it into Chinese often yields better results.

News! The demo for a better and faster CogView2 (new version) is available!

News! The paper of CogView has been accepted by NeurIPS 2021!

CogView is a pretrained (4B-param) transformer for text-to-image generation in the general domain.

  • Read our paper CogView: Mastering Text-to-Image Generation via Transformers on arXiv for a formal introduction. PB-relax and Sandwich-LN can also help you train large and deep transformers stably (e.g., eliminating NaN losses).
  • Visit our demo at the Github Page or Wudao! (Without post-selection or super-resolution; it currently only supports simplified Chinese input, but you can translate text from other languages into Chinese as input. Note: Wudao provides faster access for users from mainland China.)
  • Download our pretrained models from Tsinghua Cloud.
  • Cite our paper if you find our work helpful~
@article{ding2021cogview,
  title={CogView: Mastering Text-to-Image Generation via Transformers},
  author={Ding, Ming and Yang, Zhuoyi and Hong, Wenyi and Zheng, Wendi and Zhou, Chang and Yin, Da and Lin, Junyang and Zou, Xu and Shao, Zhou and Yang, Hongxia and Tang, Jie},
  journal={arXiv preprint arXiv:2105.13290},
  year={2021}
}
  • Google Colab: Two contributors successfully set up CogView on Colab. Links to Colab!

Getting Started

Setup

  • Hardware: Linux servers with Nvidia V100s or A100s are recommended, but it is also okay to run the pretrained models with a smaller --max-inference-batch-size or to train smaller models on less powerful GPUs.

  • Environment (Option 1): Please first install PyTorch (>=1.7.0) and apex, and then install the other dependencies via pip install -r requirements.txt. (A minimal sketch of these steps appears at the end of this section.)

  • Environment (Option 2): We provide a docker image in case you have trouble setting up the environment. Pull the image, create a (background) container and get into it via:

    docker pull cogview/cuda111_torch181_deepspeed040
    ./env/start_docker.sh && docker exec -it bg-cogview bash
    
    cd /root/cogview # in the container
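
For Option 1, a minimal sketch of the manual steps above (the apex source and build mode are assumptions, not pinned by this README; adjust to your CUDA setup):

    pip install "torch>=1.7.0"                            # any PyTorch >= 1.7.0 with CUDA support
    git clone https://github.com/NVIDIA/apex
    cd apex && pip install -v --no-cache-dir ./ && cd ..  # see apex's README for the CUDA-extension build
    pip install -r requirements.txt                       # remaining dependencies of this repo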
    

Download

  1. Download the image tokenizer vqvae_hard_biggerset_011.pt from BAAI website or Tsinghua Cloud. Place the file under pretrained/vqvae.
wget 'https://cloud.tsinghua.edu.cn/f/71607a5dca69417baa8c/?dl=1' -O pretrained/vqvae/vqvae_hard_biggerset_011.pt
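
If pretrained/vqvae does not exist yet in your checkout, create it first (a minor addition to the command above):

mkdir -p pretrained/vqvae
wget 'https://cloud.tsinghua.edu.cn/f/71607a5dca69417baa8c/?dl=1' -O pretrained/vqvae/vqvae_hard_biggerset_011.pt
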
  2. Download models from Project Wudao-Wenhui.

    FileName             Description
    cogview-base.tar     The pretrained text-to-image model.
    cogview-caption.tar  Finetuned image-to-text model, also used for reranking.
    cogview-sr.tar       Finetuned super-resolution model. (Warning: it runs slowly.)

    Uncompress them into pretrained/cogview/. The following command is a template and should be adapted to each model name; a loop sketch follows.

    tar -xvf cogview-{base, sr, caption}.tar -C pretrained/cogview/
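
    The braces above are a placeholder, not valid shell brace expansion (the spaces break it); a hedged way to extract all three archives:

    mkdir -p pretrained/cogview
    for f in cogview-base cogview-caption cogview-sr; do
        tar -xvf "$f.tar" -C pretrained/cogview/
    done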
    
  3. (Only needed for the training tutorial; skip it for inference.) Download a small "bird-and-animal" example dataset from our link at Tsinghua Cloud.

wget https://cloud.tsinghua.edu.cn/f/1e4963ec8ac84941ba68/?dl=1 -O data/bird_animal.bin
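
As above, create the target folder first and quote the URL so the shell does not try to expand the ? (a minor hardening of the command above):

mkdir -p data
wget 'https://cloud.tsinghua.edu.cn/f/1e4963ec8ac84941ba68/?dl=1' -O data/bird_animal.bin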

Run CogView! (Model Inference)

We encapsulate the generation functions into scripts. See generate_samples.py and arguments.py for details.

Text-to-Image Generation

Write text queries (one per line) into input.txt and run:

./scripts/text2image.sh --debug

The results will be saved in a new folder samples_text2image/.
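
A minimal worked example (the prompts are hypothetical; simplified Chinese works best):

printf '一只戴着眼镜的猫\n夕阳下的长城\n' > input.txt   # "a cat wearing glasses", "the Great Wall at sunset"
./scripts/text2image.sh --debug
ls samples_text2image/   # concatenated images named by input text and date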

The main arguments useful for inference are:

  • --input-source [path or "interactive"]. The path of the input file; it can also be "interactive", which launches a CLI.
  • --output-path [path]. The folder containing the results.
  • --batch-size [int]. The number of samples to generate per query.
  • --max-inference-batch-size [int]. Maximum batch size per forward pass. Reduce it if OOM.
  • --debug. Only save concatenated images for all generated samples, and name them by input text and date.
  • --with-id. When toggled, you must specify an "id" before each input, e.g. 001\t一个漂亮的女孩 ("a beautiful girl"), where \t denotes a TAB (NOT a space). For each input it generates batch-size separate images in a folder named after the "id". Conflicts with --debug.
  • --device [int]. The index of the GPU to run on.
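
Combining a few of these, a hedged example, assuming text2image.sh forwards extra flags to generate_samples.py (the paths and values are illustrative):

./scripts/text2image.sh --input-source interactive --output-path samples_demo \
    --batch-size 8 --max-inference-batch-size 4 --device 0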

Super-resolution

Run the following script and input text\t{image_path}, where {image_path} is the path of a previously generated image.

./scripts/super_resolution.sh

Note: It is only effective for images generated by our image tokenizer (due to the token distribution).
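
A hedged way to prepare such an input line in a file (the \t must be a real TAB; the text and image path are illustrative), which can then be fed via --input-source if the script accepts the same inference flags as above:

printf '一个漂亮的女孩\t%s\n' samples_text2image/sample.jpg > sr_input.txt
./scripts/super_resolution.sh --input-source sr_input.txt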

Image-to-Text

The input is "one image path per line", and will print the results to stdout.

./scripts/image2text.sh

Note: The model is not optimized for this task, so it might not be very competitive (but okay). We will consider releasing a version finetuned on this task for a longer period in the future. (TODO)
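
A hedged usage sketch, assuming the script reads input.txt by default like the other inference scripts (the image paths are hypothetical):

ls samples_text2image/*.jpg > input.txt   # one image path per line; adjust the extension to the actual outputs
./scripts/image2text.sh > captions.txt    # results go to stdout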

Post-selection

This application only takes file input, where each line is {text}\t{image_path1}\t{image_path2}\t{image_path3}.... The output is {output_path}/scores.txt, where each line is a list of scores corresponding to the same line of the input.

./scripts/post_selection.sh

Note: In the released code, for simplicity, we did not expose the raw API, which supports some advanced generation modes, e.g. generating from text plus part of an image.
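
A hedged input-preparation sketch (the text and image paths are illustrative; the separators must be real TAB characters), assuming the script accepts the --input-source and --output-path flags described above:

printf '一个漂亮的女孩\t%s\t%s\t%s\n' img1.jpg img2.jpg img3.jpg > ps_input.txt
./scripts/post_selection.sh --input-source ps_input.txt --output-path ps_output
cat ps_output/scores.txt   # one list of scores per input line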

Training

Here we use the small bird-and-animal subset of our dataset for the tutorial. The binary dataset is generated by our cogdata toolkit. Please wait for a formal release of cogdata with tutorials (although it is already available now).

Single Node

After downloading the dataset, directly run

./scripts/pretrain_single_node.sh

Multiple Nodes

If you want to train the models on multiple servers interconnected by InfiniBand without a shared file system (you may need pdsh to accelerate this process):

  1. On each server, use git clone to download this repo, and make sure the data (LMDB format) are moved into the data subfolder.
  2. On each server, echo "ip1 ip2 <other IPs>" > ./docker/ip_list.txt, and then start the docker container via ./env/start_docker.sh.
  3. Get into the container on the first node via docker exec -it bg-cogview bash.
  4. Get into /root/cogview and run ./scripts/pretrain_multiple_nodes.sh. You may need to change the config (especially OPTIONS_NCCL) in the shell script; a consolidated sketch of these steps follows.
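
Putting the steps together, a hedged, consolidated sketch (assumes this repo lives at https://github.com/THUDM/CogView; the data location and IP addresses are made up):

# on every server
git clone https://github.com/THUDM/CogView && cd CogView
mv /path/to/lmdb_data data/                               # hypothetical location of the LMDB-format data
echo "192.168.0.1 192.168.0.2" > ./docker/ip_list.txt     # made-up IPs of the participating nodes
./env/start_docker.sh

# on the first node only
docker exec -it bg-cogview bash
cd /root/cogview && ./scripts/pretrain_multiple_nodes.sh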

See arguments.py for advanced training options. (TODO)

[more samples]

More Repositories

1. ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model (Python, 39,038 stars)
2. ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM (Python, 15,431 stars)
3. ChatGLM3 - ChatGLM3 series: Open Bilingual Chat LLMs (Python, 11,719 stars)
4. CodeGeeX - CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023) (Python, 7,733 stars)
5. GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) (Python, 7,599 stars)
6. CodeGeeX2 - CodeGeeX2: A More Powerful Multilingual Code Generation Model (Python, 7,022 stars)
7. CogVLM - A state-of-the-art-level open visual language model (multimodal pretrained model) (Python, 4,870 stars)
8. VisualGLM-6B - Chinese and English multimodal conversational language model (Python, 3,973 stars)
9. CogVideo - Text-to-video generation. The repo for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers" (Python, 3,487 stars)
10. GLM - GLM (General Language Model) (Python, 3,006 stars)
11. P-tuning-v2 - An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks (Python, 1,875 stars)
12. AgentBench - A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) (Python, 1,836 stars)
13. CogDL - CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023) (Python, 1,693 stars)
14. WebGLM - WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023) (Python, 1,499 stars)
15. AgentTuning - AgentTuning: Enabling Generalized Agent Abilities for LLMs (Python, 1,209 stars)
16. CogView2 - Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers" (Python, 928 stars)
17. ImageReward - [NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation (Python, 921 stars)
18. P-tuning - A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too" (Python, 883 stars)
19. SwissArmyTransformer - A flexible and powerful library to develop your own Transformer variants (Python, 842 stars)
20. GATNE - Source code and dataset for the KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network" (Python, 511 stars)
21. LongBench - LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Python, 463 stars)
22. CogQA - Source code and dataset for the ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale" (Python, 454 stars)
23. GraphMAE - GraphMAE: Self-Supervised Masked Graph Autoencoders (KDD'22) (Python, 413 stars)
24. GCC - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training (KDD 2020) (Python, 315 stars)
25. MathGLM - Official PyTorch implementation for MathGLM (Python, 305 stars)
26. HGB - Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks (Python, 285 stars)
27. ComiRec - Source code and dataset for the KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation" (Python, 268 stars)
28. KOBE - Towards Knowledge-Based Personalized Product Description Generation in E-commerce (KDD 2019) (Python, 236 stars)
29. NLP4Rec-Papers - Paper list of NLP for recommender systems (227 stars)
30. ProNE - Source code and dataset for the IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning" (Python, 224 stars)
31. RelayDiffusion - The official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" (ICLR 2024 Spotlight) (Python, 220 stars)
32. Chinese-Transformer-XL (Python, 212 stars)
33. GRAND - Source code and dataset of the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs" (Python, 202 stars)
34. AlignBench - A multi-dimensional benchmark for evaluating the Chinese alignment of LLMs (Python, 184 stars)
35. icetk - A unified tokenization tool for images, Chinese, and English (Python, 145 stars)
36. AutoWebGLM (Python, 140 stars)
37. KBRD - Towards Knowledge-Based Recommender Dialog System (EMNLP 2019) (Python, 133 stars)
38. iPrompt - Code, data, and demo for the paper "Controllable Generation from Pre-trained Language Models via Inverse Prompting" (Python, 120 stars)
39. CogCoM (Python, 119 stars)
40. MCNS - Source code and dataset for the KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning" (Python, 111 stars)
41. GraphMAE2 - GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner (WWW'23) (Python, 105 stars)
42. LongAlign - LongAlign: A Recipe for Long Context Alignment Encompassing Data, Training, and Evaluation (Python, 105 stars)
43. ProteinLM - Protein Language Model (Python, 102 stars)
44. grb - Graph Robustness Benchmark: a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of graph machine learning (Python, 89 stars)
45. GraphSGAN - Implementation of "GraphSGAN", a GAN-based semi-supervised learning algorithm for graph data (Python, 84 stars)
46. kgTransformer - kgTransformer: pre-training for reasoning over complex KG queries (KDD'22) (Python, 83 stars)
47. ScenarioMeta - Source code and dataset for the KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation" (Python, 81 stars)
48. OAG-BERT - A heterogeneous entity-augmented academic language model based on the Open Academic Graph (OAG) (76 stars)
49. CogKR - Source code and dataset for the paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning" (Python, 70 stars)
50. FewNLU (Python, 67 stars)
51. SelfKG - Code for the WWW 2022 paper "SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs" (Python, 65 stars)
52. XDAI (Python, 62 stars)
53. Multilingual-GLM - The multilingual variant of GLM, a general language model trained with an autoregressive blank-infilling objective (Python, 62 stars)
54. OAG - Source code and dataset for the KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs" (Python, 61 stars)
55. SciGLM - SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (Python, 53 stars)
56. Graph-Reading-Group - Daily reading group on graphs at KEG (45 stars)
57. CogAgent (38 stars)
58. SCR - SCR: Training Graph Neural Networks with Consistency Regularization (Python, 36 stars)
59. FastLDM - Inference speed-up for stable-diffusion (LDM) with TensorRT (Python, 34 stars)
60. KDD-Industrial-Papers - A list of recent industrial papers in KDD'16–'18 (30 stars)
61. WhoIsWho - KDD'23 Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit (Python, 29 stars)
62. GraphCAD - TKDE'22 GraphCAD: https://arxiv.org/pdf/2108.07516.pdf (Python, 29 stars)
63. GRAND-plus - Code and dataset for the paper "GRAND+: Scalable Graph Random Neural Networks" (Python, 29 stars)
64. ChatGLM-Math (Python, 28 stars)
65. GIAAD - Graph Injection Adversarial Attack & Defense dataset, extracted from the KDD Cup 2020 ML2 Track (Python, 22 stars)
66. Tsinghua-ML-Course - Course materials for the ML course at Tsinghua (HTML, 22 stars)
67. GLM-iprompt - Apply iPrompt on GLM with innovative new methods. Currently supports Chinese QA, English QA, and Chinese poem generation (Python, 21 stars)
68. ApeGNN - ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23) (Python, 19 stars)
69. HOSMEL - A task-relevant entity linking toolkit (Python, 17 stars)
70. tdgia - Code for the paper "TDGIA: Effective Injection Attacks on Graph Neural Networks" (KDD 2021, research track) (Python, 17 stars)
71. MRT - MRT: Tracing the Evolution of Scientific Publications (TKDE 2021) (17 stars)
72. eTrust - Source code and dataset for the TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform" (C++, 16 stars)
73. BatchSampler - Source code for BatchSampler, accepted at KDD'23 (Python, 16 stars)
74. LargeScale (Python, 15 stars)
75. Efficient-Head-Finetuning - Source code for the EMNLP 2022 long paper "Parameter-Efficient Tuning Makes a Good Classification Head" (Python, 13 stars)
76. IGB - Source code and dataset for the IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs" (Python, 12 stars)
77. Self-Contrast - Extensive Self-Contrast Enables Feedback-Free Language Model Alignment (Python, 11 stars)
78. paper-source-trace (Python, 8 stars)
79. citation-prediction (Python, 8 stars)
80. RecDCL - RecDCL: Dual Contrastive Learning for Recommendation (WWW'24, Oral) (Python, 8 stars)
81. Refined-cora-citeseer (6 stars)
82. scholar-profiling (Jupyter Notebook, 6 stars)
83. DropConn - DropConn: Dropout Connection Based Random GNNs for Molecular Property Prediction (TKDE'24) (Python, 5 stars)
84. OAG-entity-alignment (Python, 5 stars)
85. STAM - Source code and dataset for the WWW'22 paper "STAM: A Spatiotemporal Aggregation Method for Graph Neural Network-based Recommendation" (Python, 4 stars)
86. whoiswho-top-solutions (Python, 4 stars)
87. Paper-Rec (Python, 3 stars)
88. DistAlign-GNNs (Jupyter Notebook, 3 stars)
89. open_clip_pix2struct - pix2struct version of open_clip (Jupyter Notebook, 3 stars)
90. RecNS - Source code and dataset for the TKDE'22 paper "Region or Global? A Principle for Negative Sampling in Graph-based Recommendation" (Python, 3 stars)
91. tot-prediction (Python, 2 stars)
92. Reviewer-Rec (Python, 2 stars)
93. oag-author-tagging (Python, 2 stars)
94. OAG-taxo (Python, 1 star)