  • Stars: 5,000
  • Rank: 7,959 (Top 0.2 %)
  • Language: Python
  • License: MIT License
  • Created: about 3 years ago
  • Updated: about 1 year ago

Repository Details

guided-diffusion

This is the codebase for the paper Diffusion Models Beat GANs on Image Synthesis.

This repository is based on openai/improved-diffusion, with modifications for classifier conditioning and architecture improvements.

Download pre-trained models

We have released checkpoints for the main models in the paper. Before using these models, please review the corresponding model card to understand their intended use and limitations.

Here are the download links for each model checkpoint:

Sampling from pre-trained models

To sample from these models, you can use the classifier_sample.py, image_sample.py, and super_res_sample.py scripts. Here, we provide flags for sampling from all of these models. We assume that you have downloaded the relevant model checkpoints into a folder called models/.

For these examples, we will generate 100 samples with batch size 4. Feel free to change these values.

SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 250"

Classifier guidance

For these sampling runs, note that you can set --classifier_scale 0 to sample from the base diffusion model; in that case you can also use the image_sample.py script instead of classifier_sample.py. A conceptual sketch of what the classifier scale does appears after the commands below.

  • 64x64 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --dropout 0.1 --image_size 64 --learn_sigma True --noise_schedule cosine --num_channels 192 --num_head_channels 64 --num_res_blocks 3 --resblock_updown True --use_new_attention_order True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/64x64_classifier.pt --classifier_depth 4 --model_path models/64x64_diffusion.pt $SAMPLE_FLAGS
  • 128x128 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 128 --learn_sigma True --noise_schedule linear --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 0.5 --classifier_path models/128x128_classifier.pt --model_path models/128x128_diffusion.pt $SAMPLE_FLAGS
  • 256x256 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion.pt $SAMPLE_FLAGS
  • 256x256 model (unconditional):
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 10.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion_uncond.pt $SAMPLE_FLAGS
  • 512x512 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 512 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 False --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 4.0 --classifier_path models/512x512_classifier.pt --model_path models/512x512_diffusion.pt $SAMPLE_FLAGS
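
Conceptually, classifier guidance nudges each denoising step toward the target class by adding the scaled gradient of log p(y | x_t), computed with a classifier trained on noisy images, to the model's predicted mean. A minimal sketch of that gradient term (names and wiring are illustrative rather than the exact hooks in classifier_sample.py):

import torch
import torch.nn.functional as F

def classifier_gradient(classifier, x, t, y, classifier_scale=1.0):
    """Scaled gradient of log p(y | x_t) with respect to the noisy input x_t.

    Sketch only: `classifier` is assumed to take a noisy batch x and
    timesteps t and return per-class logits, like the noisy-image
    classifiers released with this repository.
    """
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        logits = classifier(x_in, t)
        log_probs = F.log_softmax(logits, dim=-1)
        selected = log_probs[range(len(y)), y]
        grad = torch.autograd.grad(selected.sum(), x_in)[0]
    return classifier_scale * grad

Setting --classifier_scale 0 makes this term vanish, which is why the commands above reduce to sampling from the base diffusion model in that case.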

Upsampling

For these runs, we assume you have some base samples in a file 64_samples.npz or 128_samples.npz for the two respective models.

  • 64 -> 256:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --large_size 256  --small_size 64 --learn_sigma True --noise_schedule linear --num_channels 192 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python super_res_sample.py $MODEL_FLAGS --model_path models/64_256_upsampler.pt --base_samples 64_samples.npz $SAMPLE_FLAGS
  • 128 -> 512:
MODEL_FLAGS="--attention_resolutions 32,16 --class_cond True --diffusion_steps 1000 --large_size 512 --small_size 128 --learn_sigma True --noise_schedule linear --num_channels 192 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python super_res_sample.py $MODEL_FLAGS --model_path models/128_512_upsampler.pt $SAMPLE_FLAGS --base_samples 128_samples.npz
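
The --base_samples archive is simply the output of a lower-resolution sampling run. A small sanity-check sketch, assuming the arr_0/arr_1 layout described above (the source filename is hypothetical):

import numpy as np

# Repackage a 64x64 class-conditional sampling run as the upsampler input.
# Assumption: low-resolution samples live under "arr_0" (uint8 NHWC) and
# their class labels under "arr_1".
src = np.load("samples_100x64x64x3.npz")  # hypothetical source filename
assert src["arr_0"].shape[1:] == (64, 64, 3)
np.savez("64_samples.npz", arr_0=src["arr_0"], arr_1=src["arr_1"])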

LSUN models

These models are class-unconditional and correspond to a single LSUN class. Here, we show how to sample from lsun_bedroom.pt, but the other two LSUN checkpoints should work as well:

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.1 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python image_sample.py $MODEL_FLAGS --model_path models/lsun_bedroom.pt $SAMPLE_FLAGS

You can sample from lsun_horse_nodropout.pt by changing the dropout flag:

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.0 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python image_sample.py $MODEL_FLAGS --model_path models/lsun_horse_nodropout.pt $SAMPLE_FLAGS

Note that for these models, the best samples result from using 1000 timesteps:

SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 1000"

Results

This table summarizes our ImageNet results for pure guided diffusion models:

Dataset             FID   Precision  Recall
ImageNet 64x64      2.07  0.74       0.63
ImageNet 128x128    2.97  0.78       0.59
ImageNet 256x256    4.59  0.82       0.52
ImageNet 512x512    7.72  0.87       0.42

This table shows the best results for high resolutions when using upsampling and guidance together:

Dataset             FID   Precision  Recall
ImageNet 256x256    3.94  0.83       0.53
ImageNet 512x512    3.85  0.84       0.53

Finally, here are the unguided results on individual LSUN classes:

Dataset         FID   Precision  Recall
LSUN Bedroom    1.90  0.66       0.51
LSUN Cat        5.57  0.63       0.52
LSUN Horse      2.57  0.71       0.55

Training models

Training diffusion models is described in the parent repository (openai/improved-diffusion). Training a classifier is similar. We assume you have put training hyperparameters into a TRAIN_FLAGS variable and classifier hyperparameters into a CLASSIFIER_FLAGS variable. Then you can run:

mpiexec -n N python scripts/classifier_train.py --data_dir path/to/imagenet $TRAIN_FLAGS $CLASSIFIER_FLAGS

Make sure to divide the batch size in TRAIN_FLAGS by the number of MPI processes you are using. For example, to keep the global batch size of 256 used below across 8 MPI processes, set --batch_size 32.

Here are flags for training the 128x128 classifier. You can modify these for training classifiers at other resolutions:

TRAIN_FLAGS="--iterations 300000 --anneal_lr True --batch_size 256 --lr 3e-4 --save_interval 10000 --weight_decay 0.05"
CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True"

To sample from a 128x128 classifier-guided model with 25 DDIM steps:

MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --image_size 128 --learn_sigma True --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True --classifier_scale 1.0 --classifier_use_fp16 True"
SAMPLE_FLAGS="--batch_size 4 --num_samples 50000 --timestep_respacing ddim25 --use_ddim True"
mpiexec -n N python scripts/classifier_sample.py \
    --model_path /path/to/model.pt \
    --classifier_path path/to/classifier.pt \
    $MODEL_FLAGS $CLASSIFIER_FLAGS $SAMPLE_FLAGS

To sample for 250 timesteps without DDIM, replace --timestep_respacing ddim25 with --timestep_respacing 250, and replace --use_ddim True with --use_ddim False.
