# Denoising Diffusion Probabilistic Models
Unofficial PyTorch implementation of Denoising Diffusion Probabilistic Models [1].

This implementation follows most of the details of the official TensorFlow implementation [2]. I port [2] to PyTorch in PyTorch coding style, hoping that anyone familiar with PyTorch can easily understand every implementation detail.
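For readers mapping the code back to the paper, the heart of training is the simplified epsilon-prediction objective of [1]: add noise to a clean image at a random timestep and train the network to predict that noise. Below is a minimal PyTorch sketch of this loss; the `model(x_t, t)` call signature, the 1000-step linear beta schedule, and all names here are illustrative assumptions, not this repo's exact API.

```python
import torch
import torch.nn.functional as F

T = 1000                                           # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)              # linear beta schedule as in [1]
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative product \bar{alpha}_t

def ddpm_loss(model, x0):
    """Simplified DDPM objective: predict the noise added at a random step t."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)         # random timestep per sample
    noise = torch.randn_like(x0)                             # epsilon ~ N(0, I)
    a_bar = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward diffusion q(x_t | x_0)
    eps_pred = model(x_t, t)                                  # network predicts the noise
    return F.mse_loss(eps_pred, noise)
```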
## TODO
- Datasets
  - Support CIFAR10
  - Support LSUN
  - Support CelebA-HQ
- Features
  - Gradient accumulation (see the sketch after this list)
  - Multi-GPU training
- Reproducing Experiment
  - CIFAR10
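The gradient accumulation feature can be pictured with the generic PyTorch sketch below; it reuses `ddpm_loss` and `model` from the sketch above, and `optimizer`, `dataloader`, `device`, and `accum_steps` are hypothetical names, not the repo's exact code.

```python
accum_steps = 4                          # effective batch = accum_steps x dataloader batch size
optimizer.zero_grad()
for i, (x0, _) in enumerate(dataloader):
    # Scale the loss so the accumulated gradients match one large-batch update.
    loss = ddpm_loss(model, x0.to(device)) / accum_steps
    loss.backward()                      # gradients add up across the small batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()                 # apply the accumulated update
        optimizer.zero_grad()
```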
## Requirements
- Python 3.6
- Packages

    Upgrade pip for installing the latest tensorboard:
    ```
    pip install -U pip setuptools
    pip install -r requirements.txt
    ```
- Download the precalculated statistics for the dataset (a loading sketch follows this list):

    Create a folder `stats` for `cifar10.train.npz`:
    ```
    stats
    └── cifar10.train.npz
    ```
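For reference, a statistics file like this typically stores the Inception-feature mean and covariance used to compute FID against generated samples. A minimal loading sketch follows; the key names `mu` and `sigma` are an assumption about the file layout, not verified against this repo.

```python
import numpy as np

# Inspect the precalculated dataset statistics (assumed keys: 'mu' and 'sigma',
# the Inception-feature mean vector and covariance matrix used for FID).
stats = np.load('./stats/cifar10.train.npz')
mu, sigma = stats['mu'], stats['sigma']
print(mu.shape, sigma.shape)   # e.g. (2048,) and (2048, 2048) for InceptionV3 features
```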
## Train From Scratch
- Take CIFAR10 for example:
    ```
    python main.py --train \
        --flagfile ./config/CIFAR10.txt
    ```
- [Optional] Overwrite arguments
    ```
    python main.py --train \
        --flagfile ./config/CIFAR10.txt \
        --batch_size 64 \
        --logdir ./path/to/logdir
    ```
- [Optional] Select GPU IDs
    ```
    CUDA_VISIBLE_DEVICES=1 python main.py --train \
        --flagfile ./config/CIFAR10.txt
    ```
- [Optional] Multi-GPU training (see the sketch after this list)
    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py --train \
        --flagfile ./config/CIFAR10.txt \
        --parallel
    ```
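Conceptually, the `--parallel` flag corresponds to single-node data parallelism: the model is replicated across all visible GPUs and each batch is split among them. The sketch below uses `torch.nn.DataParallel` to illustrate the idea; whether this repo uses `DataParallel` or another mechanism internally is an assumption, and `model` is reused from the training sketch above.

```python
import torch

# Replicate the model across all visible GPUs; forward() then scatters the
# batch to the replicas and gathers their outputs.
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
model = model.to('cuda')
```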
## Evaluate
- A `flagfile.txt` is autosaved to your log directory. The default logdir for `config/CIFAR10.txt` is `./logs/DDPM_CIFAR10_EPS`.
- Start evaluation (see the sampling sketch after this list for a rough idea of what this step does)
    ```
    python main.py \
        --flagfile ./logs/DDPM_CIFAR10_EPS/flagfile.txt \
        --notrain \
        --eval
    ```
- [Optional] Multi-GPU evaluation
    ```
    CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py \
        --flagfile ./logs/DDPM_CIFAR10_EPS/flagfile.txt \
        --notrain \
        --eval \
        --parallel
    ```
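Evaluation samples images from the trained model and scores them against the precalculated statistics. For intuition, the reverse (ancestral sampling) loop of [1] looks roughly like the sketch below; it reuses `betas`, `alphas_bar`, `T`, and `model` from the training sketch above and is an illustration of the algorithm, not this repo's exact sampler.

```python
import torch

@torch.no_grad()
def ddpm_sample(model, shape, device='cuda'):
    """Ancestral sampling: start at pure noise x_T and denoise for T steps."""
    betas_d = betas.to(device)
    alphas_d = 1.0 - betas_d
    a_bar_d = alphas_bar.to(device)
    x = torch.randn(shape, device=device)                       # x_T ~ N(0, I)
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = model(x, t_batch)                                  # predicted noise
        coef = (1.0 - alphas_d[t]) / (1.0 - a_bar_d[t]).sqrt()
        mean = (x - coef * eps) / alphas_d[t].sqrt()             # posterior mean of x_{t-1}
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas_d[t].sqrt() * noise                     # sigma_t = sqrt(beta_t)
    return x
```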
## Reproducing Experiment

### CIFAR10
The checkpoint can be downloaded from my drive.