pix2pix + BEGAN
- Image-to-Image Translation with Conditional Adversarial Nets
- BEGAN: Boundary Equilibrium Generative Adversarial Networks
Install
- Install PyTorch and torchvision (pytorch/vision)
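A quick sanity check after installation (a minimal sketch; CUDA is optional but recommended for training):

```python
import torch
import torchvision

# report the installed versions and whether a CUDA device is visible
print('torch:', torch.__version__, 'torchvision:', torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())
```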
Dataset
- Download the images from the authors' original implementation
- Suppose you downloaded the "facades" dataset into
/path/to/facades
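The images in the pix2pix release typically store the two domains side by side in a single file. A minimal PIL sketch of splitting one such pair (the 512x256 layout and the file name are assumptions; the training scripts handle this themselves):

```python
from PIL import Image

def split_pair(path):
    """Split a side-by-side pix2pix image into its two halves.

    Assumes the file stores both domains concatenated horizontally
    (e.g. a 512x256 facades image); the file name below is illustrative.
    """
    pair = Image.open(path).convert('RGB')
    w, h = pair.size
    left = pair.crop((0, 0, w // 2, h))    # first domain
    right = pair.crop((w // 2, 0, w, h))   # second domain
    return left, right

left, right = split_pair('/path/to/facades/train/1.jpg')
```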
Train
- pix2pixGAN
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
- pix2pixBEGAN
CUDA_VISIBLE_DEVICES=x python main_pix2pixBEGAN.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
- Most of the parameters are the same for a fair comparison.
- The original pix2pix is modeled as a conditional GAN, but we did not condition D: input samples are not given to D (only target samples are).
- We used an image buffer (analogous to the replay buffer in DQN) when training D; see the sketch after this list.
- Try other datasets as needed; you should see similar results.
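A minimal sketch of such an image buffer (names and details are illustrative; the repo's own implementation may differ). Older generated images are replayed to D alongside fresh ones, in the spirit of DQN's replay buffer, and only target-domain images (real or generated targets) ever reach D:

```python
import random
import torch

class ImageBuffer:
    """Pool of previously generated images (replay-buffer style).

    With probability 0.5, a fresh fake shown to D is swapped for an older
    one from the pool; the pool size here is illustrative.
    """

    def __init__(self, max_size=50):
        self.max_size = max_size
        self.images = []

    def query(self, fakes):
        out = []
        for img in fakes:
            img = img.unsqueeze(0)            # keep a batch dimension of 1
            if len(self.images) < self.max_size:
                self.images.append(img)
                out.append(img)
            elif random.random() < 0.5:
                idx = random.randrange(self.max_size)
                out.append(self.images[idx])  # replay an old fake to D
                self.images[idx] = img        # store the new one for later
            else:
                out.append(img)
        return torch.cat(out, dim=0)

# typical use in the D step (hypothetical names):
# fake_B = netG(real_A)
# d_fake_input = buffer.query(fake_B.detach())
```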
Training Curve(pix2pixBEGAN)
- L_D and L_G w/ BEGAN: we found that both L_D and L_G stay consistently balanced (equilibrium parameter gamma = 0.7) and converge, even though networks D and G differ in model capacity and detailed layer specification (see the loss sketch after this list).
- M_global: as the author states, M_global is a good indicator for monitoring convergence.
- Parsing the log: the training log is saved in the directory you specified, named train.log.
- L_D and L_G w/ GAN
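For reference, the L_D, L_G, and M_global quantities above follow the BEGAN formulation. A sketch with illustrative names (losses treated as plain numbers), using the gamma = 0.7 mentioned above and the BEGAN paper's default lambda_k:

```python
def began_losses(recon_real, recon_fake, k, gamma=0.7, lambda_k=0.001):
    """BEGAN objective and equilibrium update (following the BEGAN paper).

    recon_real = L(y):    L1 reconstruction error of the auto-encoder D on a real target
    recon_fake = L(G(x)): L1 reconstruction error of D on a generated target
    k is the equilibrium control variable k_t; both losses are plain floats here.
    """
    loss_D = recon_real - k * recon_fake            # minimized w.r.t. D
    loss_G = recon_fake                             # minimized w.r.t. G
    balance = gamma * recon_real - recon_fake       # gamma * L(y) - L(G(x))
    k_next = min(max(k + lambda_k * balance, 0.0), 1.0)   # k_{t+1}, clamped to [0, 1]
    M_global = recon_real + abs(balance)            # convergence measure
    return loss_D, loss_G, k_next, M_global
```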
Comparison
- pix2pixGAN vs. pix2pixBEGAN
CUDA_VISIBLE_DEVICES=x python compare.py --netG_GAN /path/to/netG.pth --netG_BEGAN /path/to/netG.pth --exp /path/to/a/dir/for/saving --tstDataroot /path/to/facades/test/
- Check out more results (ordered as: input, real target, fake (pix2pixBEGAN), fake (pix2pixGAN)).
- Interpolation on the input space (see the sketch after this list).
CUDA_VISIBLE_DEVICES=x python interpolateInput.py --tstDataroot ~/path/to/your/facades/test/ --interval 14 --exp /path/to/resulting/dir --tstBatchSize 4 --netG /path/to/your/netG_epoch_xxx.pth
- Upper rows: pix2pixGAN; lower rows: pix2pixBEGAN.
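Conceptually, the interpolation script blends two input images in pixel space and translates each blend with the generator. A hedged sketch (the netG interface and the step count are assumptions; use interpolateInput.py for the actual figures):

```python
import torch

@torch.no_grad()
def interpolate_inputs(netG, x0, x1, steps=14):
    """Linearly blend two batched input images and translate every
    intermediate blend with the generator (assumed: image tensor in,
    image tensor out)."""
    outputs = []
    for i in range(steps + 1):
        alpha = i / steps
        x = (1.0 - alpha) * x0 + alpha * x1   # interpolation on the input space
        outputs.append(netG(x))
    return torch.cat(outputs, dim=0)
```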
Showing reconstruction from D and generation from G
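Because the BEGAN discriminator is an auto-encoder, its output is itself an image and can be shown next to the generator's output. A hedged sketch of saving such a grid (the netD and netG interfaces are assumptions; the repo's scripts produce the actual figures):

```python
import torch
import torchvision.utils as vutils

@torch.no_grad()
def save_recon_and_generation(netG, netD, real_A, real_B, out_path):
    """Save one grid: input, real target, D's reconstruction of the real
    target, G's output, and D's reconstruction of G's output.

    Assumes netD(x) returns a reconstruction of x (BEGAN auto-encoder D)
    and netG(x) returns a translated image.
    """
    fake_B = netG(real_A)
    rows = torch.cat([real_A, real_B, netD(real_B), fake_B, netD(fake_B)], dim=0)
    vutils.save_image(rows, out_path, nrow=real_A.size(0), normalize=True)
```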
Reference
misc.
- We apologize for any inconvenience when cloning this project: the result images are large, so cloning can take a while. Please be patient. (Downloading the zip file is usually faster.)