Denoising Diffusion Restoration Models (DDRM)

[NeurIPS 2022] Official code repository.

arXiv | PDF | Project Website

Bahjat Kawar¹, Michael Elad¹, Stefano Ermon², Jiaming Song²
¹Technion, ²Stanford University

DDRM uses pre-trained DDPMs for solving general linear inverse problems. It does so efficiently and without problem-specific supervised training.
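Concretely, DDRM considers observations of the form y = Hx + z, where H is a known linear degradation operator and z is i.i.d. Gaussian noise with standard deviation sigma_0. As a minimal NumPy sketch (illustrative only, not code from this repository), the noisy 4x super-resolution setting can be written as block averaging plus noise:

import numpy as np

def H_sr4(x):
    # 4x super-resolution degradation: average each 4x4 pixel block
    h, w, c = x.shape
    return x.reshape(h // 4, 4, w // 4, 4, c).mean(axis=(1, 3))

x = np.random.rand(256, 256, 3)          # stand-in for a clean image
sigma_0 = 0.05                            # observation noise level
y = H_sr4(x) + sigma_0 * np.random.randn(64, 64, 3)  # observed y = Hx + z

DDRM then samples from a pre-trained DDPM, conditioned on y, to recover x.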

(Figure: DDRM overview)

Running the Experiments

The code has been tested on PyTorch 1.8 and PyTorch 1.10. See environment.yml for the conda/mamba environment specification used to run the code.
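For example, assuming conda (or mamba) is installed, the environment can be created and activated as follows (the environment name `ddrm` here is an assumption; use whatever name environment.yml defines):

conda env create -f environment.yml
conda activate ddrm  # substitute the name defined in environment.yml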

Pretrained models

We use pretrained models from https://github.com/openai/guided-diffusion, https://github.com/pesser/pytorch_diffusion and https://github.com/ermongroup/SDEdit

We use 1,000 images from the ImageNet validation set for comparison with other methods. The list of images is taken from https://github.com/XingangPan/deep-generative-prior/

The models and datasets are placed in the exp/ folder as follows:

<exp> # a folder named by the argument `--exp` given to main.py
├── datasets # all dataset files
│   ├── celeba # all CelebA files
│   ├── imagenet # all ImageNet files
│   ├── ood # out of distribution ImageNet images
│   ├── ood_bedroom # out of distribution bedroom images
│   ├── ood_cat # out of distribution cat images
│   └── ood_celeba # out of distribution CelebA images
├── logs # contains checkpoints and samples produced during training
│   ├── celeba
│   │   └── celeba_hq.ckpt # the checkpoint file for CelebA-HQ
│   ├── diffusion_models_converted
│   │   └── ema_diffusion_lsun_<category>_model
│   │       └── model-x.ckpt # the checkpoint file saved at the x-th training iteration
│   └── imagenet # ImageNet checkpoint files
│       ├── 256x256_classifier.pt
│       ├── 256x256_diffusion.pt
│       ├── 256x256_diffusion_uncond.pt
│       ├── 512x512_classifier.pt
│       └── 512x512_diffusion.pt
├── image_samples # contains generated samples
└── imagenet_val_1k.txt # list of the 1,000 ImageNet validation images used
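As an illustration, the following Python sketch creates this skeleton so the checkpoints and datasets can be dropped in (folder names are taken from the tree above; "exp" stands for whatever is passed via --exp):

from pathlib import Path

exp = Path("exp")  # the folder given to main.py via --exp
subfolders = [
    "datasets/celeba", "datasets/imagenet", "datasets/ood",
    "datasets/ood_bedroom", "datasets/ood_cat", "datasets/ood_celeba",
    "logs/celeba", "logs/diffusion_models_converted", "logs/imagenet",
    "image_samples",
]
for sub in subfolders:
    (exp / sub).mkdir(parents=True, exist_ok=True)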

We note that some models may not generate high-quality samples in unconditional image synthesis; this is especially the case for the pre-trained CelebA model.

Sampling from the model

The general command to sample from the model is as follows:

python main.py --ni --config {CONFIG}.yml --doc {DATASET} --timesteps {STEPS} --eta {ETA} --etaB {ETA_B} --deg {DEGRADATION} --sigma_0 {SIGMA_0} -i {IMAGE_FOLDER}

where the options are as follows:

  • ETA is the eta hyperparameter in the paper. (default: 0.85)
  • ETA_B is the eta_b hyperparameter in the paper. (default: 1)
  • STEPS controls how many timesteps are used in the process.
  • DEGRADATION is the type of degradation to restore from. (One of: cs2, cs4, inp, inp_lolcat, inp_lorem, deno, deblur_uni, deblur_gauss, deblur_aniso, sr2, sr4, sr8, sr16, sr_bicubic4, sr_bicubic8, sr_bicubic16, color)
  • SIGMA_0 is the standard deviation of the noise observed in y.
  • CONFIG is the name of the config file (see configs/ for a list), including hyperparameters such as batch size and network architectures.
  • DATASET is the name of the dataset used, to determine where the checkpoint file is found.
  • IMAGE_FOLDER is the name of the folder the resulting images will be placed in (default: images)

For example, to sample noisy 4x super-resolution from the ImageNet 256x256 unconditional model using 20 steps:

python main.py --ni --config imagenet_256.yml --doc imagenet --timesteps 20 --eta 0.85 --etaB 1 --deg sr4 --sigma_0 0.05

The generated images are placed in the <exp>/image_samples/{IMAGE_FOLDER} folder, where orig_{id}.png, y0_{id}.png, and {id}_-1.png are the original, degraded, and restored images, respectively.
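As a quick sanity check, the restored image can be compared to the original, for example with a simple PSNR computation (a minimal sketch: the psnr helper is ours, and the paths assume --exp exp, the default images folder, and an image id of 0):

import numpy as np
from PIL import Image

def psnr(a, b):
    # peak signal-to-noise ratio for uint8 images
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

orig = np.array(Image.open("exp/image_samples/images/orig_0.png"))
restored = np.array(Image.open("exp/image_samples/images/0_-1.png"))
print(f"PSNR: {psnr(orig, restored):.2f} dB")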

The config files contain a setting controlling whether to test on samples from the trained dataset's distribution or not.

Images for Demonstration Purposes

A list of images for demonstration purposes can be found here: https://github.com/jiamings/ddrm-exp-datasets. Place them under the <exp>/datasets folder (e.g., as sketched below), and the following commands can then be executed directly:
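For example (a sketch assuming the dataset repository's top-level folders match the layout expected under <exp>/datasets, and that --exp is exp):

git clone https://github.com/jiamings/ddrm-exp-datasets
cp -r ddrm-exp-datasets/* exp/datasets/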

CelebA noisy 4x super-resolution:

python main.py --ni --config celeba_hq.yml --doc celeba --timesteps 20 --eta 0.85 --etaB 1 --deg sr4 --sigma_0 0.05 -i celeba_hq_sr4_sigma_0.05

General content images uniform deblurring:

python main.py --ni --config imagenet_256.yml --doc imagenet_ood --timesteps 20 --eta 0.85 --etaB 1 --deg deblur_uni --sigma_0 0.0 -i imagenet_deblur_uni_sigma_0.0

Bedroom noisy 4x super-resolution:

python main.py --ni --config bedroom.yml --doc bedroom --timesteps 20 --eta 0.85 --etaB 1 --deg sr4 --sigma_0 0.05 -i bedroom_sr4_sigma_0.05

References and Acknowledgements

@inproceedings{kawar2022denoising,
    title={Denoising Diffusion Restoration Models},
    author={Bahjat Kawar and Michael Elad and Stefano Ermon and Jiaming Song},
    booktitle={Advances in Neural Information Processing Systems},
    year={2022}
}

This implementation is based on / inspired by: