

D2C: Diffusion-Decoding Models for Few-shot Conditional Generation

Project | Paper

Open In Colab

PyTorch implementation of D2C: Diffusion-Decoding Models for Few-shot Conditional Generation.

Abhishek Sinha*, Jiaming Song*, Chenlin Meng, Stefano Ermon

Stanford University

Overview

Conditional generative models of high-dimensional images have many applications, but supervision signals from conditions to images can be expensive to acquire. This paper describes Diffusion-Decoding models with Contrastive representations (D2C), a paradigm for training unconditional variational autoencoders (VAEs) for few-shot conditional image generation. By learning from as few as 100 labeled examples, D2C can be used to generate images with a certain label or manipulate an existing image to contain a certain label. Compared with state-of-the-art StyleGAN2 methods, D2C is able to manipulate certain attributes efficiently while keeping the other details intact.
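
As a rough illustration of the few-shot step, the sketch below fits a linear attribute classifier on the latent codes of a small labeled set (on the order of 100 examples) and uses its normalized weight vector as a manipulation direction. This is a generic latent-boundary recipe for illustration only; fit_attribute_boundary and shift_latent are hypothetical helpers, and the boundary checkpoints released with this repository are not necessarily produced this way.

# Hedged sketch: derive a linear attribute direction from ~100 labeled latents.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_attribute_boundary(latents, labels):
    # latents: (N, D) array of D2C latent codes; labels: (N,) binary attribute labels.
    clf = LogisticRegression(max_iter=1000).fit(latents, labels)
    direction = clf.coef_.ravel()
    return direction / np.linalg.norm(direction)   # unit attribute direction

def shift_latent(z, direction, step):
    # The sign of step controls whether the attribute is added or removed.
    return z + step * direction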

Here are some examples of image manipulation. You can see more results here.

[Image grid: manipulation results for the attributes Blond, Red Lipstick, and Beard, comparing the Original image with D2C, StyleGAN2, NVAE, and DDIM outputs.]

Getting started

The code has been tested on PyTorch 1.9.1 (CUDA 10.2).

To use the pretrained models, download the checkpoints from this link and place them under the checkpoints/ directory.

# Requires gdown >= 4.2.0
pip install "gdown>=4.2.0"
gdown https://drive.google.com/drive/u/1/folders/1DvApt-uO3uMRhFM3eIqPJH-HkiEZC1Ru -O ./ --folder
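
As a quick sanity check, you can confirm that the download produced the layout expected by the example commands below; the model.ckpt path used here is taken from those commands and applies to the FFHQ 256 checkpoint specifically.

# Hedged check that the FFHQ 256 checkpoint landed where main.py expects it.
from pathlib import Path
ckpt = Path("checkpoints/ffhq_256/model.ckpt")   # path used by the example commands below
print(ckpt, "found" if ckpt.exists() else "missing")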

Examples

The main.py file provides some basic scripts to perform inference on the checkpoints.

We will release the training code soon in a separate repository, since GPU memory becomes a bottleneck when the model is trained jointly.

Examples of image manipulation:

  • Red lipstick
python main.py ffhq_256 manipulation --d2c_path checkpoints/ffhq_256/model.ckpt --boundary_path checkpoints/ffhq_256/red_lipstick.ckpt --step 10 --image_dir images/red_lipstick --save_location results/red_lipstick
  • Beard
python main.py ffhq_256 manipulation --d2c_path checkpoints/ffhq_256/model.ckpt --boundary_path checkpoints/ffhq_256/beard.ckpt --step 20 --image_dir images/beard --save_location results/beard
  • Blond
python main.py ffhq_256 manipulation --d2c_path checkpoints/ffhq_256/model.ckpt --boundary_path checkpoints/ffhq_256/blond.ckpt --step -15 --image_dir images/blond --save_location results/blond

Example of unconditional image generation:

python main.py ffhq_256 sample_uncond --d2c_path checkpoints/ffhq_256/model.ckpt --skip 100 --save_location results/uncond_samples

Extensions

We implement a D2C class here that contains an autoencoder and a diffusion latent model. See code structure here.

Useful functions include image_to_latent, latent_to_image, sample_latent, manipulate_latent, and postprocess_latent, all of which are also called in main.py.
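
As a rough illustration of how these helpers fit together, here is a minimal sketch covering manipulation and unconditional sampling. The constructor, checkpoint-loading calls, and exact signatures are assumptions inferred from the function names and the main.py commands above, so treat this as pseudocode to check against main.py rather than a verified API.

# Hedged sketch of the D2C helper workflow; constructor and signatures are assumed.
import torch
from d2c import D2C   # assumed import path

d2c = D2C(ckpt_path="checkpoints/ffhq_256/model.ckpt")            # assumed constructor
image = torch.rand(1, 3, 256, 256)                                # stand-in for a real image batch

# Manipulation: encode, shift along an attribute boundary, decode, postprocess.
z = d2c.image_to_latent(image)
boundary = torch.load("checkpoints/ffhq_256/red_lipstick.ckpt")   # assumed boundary format
z_edit = d2c.manipulate_latent(z, boundary, step=10)
edited = d2c.postprocess_latent(d2c.latent_to_image(z_edit))

# Unconditional sampling: sample latents from the diffusion prior, then decode.
z_samples = d2c.sample_latent(num_samples=4)
samples = d2c.postprocess_latent(d2c.latent_to_image(z_samples))

If the actual signatures differ, the calls in main.py are the authoritative reference for how these functions are wired together.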

Todo

  • Release checkpoints and models for other datasets.
  • Release code for conditional generation.
  • Release training code and the procedure to convert trained models into inference models.
  • Train on higher resolution images.

References and Acknowledgements

If you find this repository useful for your research, please cite our work.

@inproceedings{sinha2021d2c,
  title={D2C: Diffusion-Decoding Models for Few-shot Conditional Generation},
  author={Sinha*, Abhishek and Song*, Jiaming and Meng, Chenlin and Ermon, Stefano},
  year={2021},
  month={December},
  abbr={NeurIPS 2021},
  url={https://arxiv.org/abs/2106.06819},
  booktitle={Neural Information Processing Systems},
  html={https://d2c-model.github.io}
}

This implementation is based on:
