pix2pix + BEGAN

A PyTorch implementation of pix2pix + BEGAN (Boundary Equilibrium Generative Adversarial Networks).

Install

Dataset

Train

  • pix2pixGAN
  • CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
  • pix2pixBEGAN
  • CUDA_VISIBLE_DEVICES=x python main_pix2pixBEGAN.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
  • Most of the parameters are kept the same for a fair comparison.
  • The original pix2pix is modelled as a conditional GAN; in our setup, input samples are not given to D (only target samples are given).
  • We used an image buffer (analogous to the replay buffer in DQN) when training D; see the sketch after this list.
  • Try other datasets as needed; similar results should be obtained.
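
A minimal sketch of such an image buffer, assuming a fixed-size pool from which previously generated fakes are re-drawn before the batch is shown to D; the class name, pool size, and usage below are illustrative, not the actual implementation in this repository.

```python
import random
import torch

class ImageBuffer:
    """Pool of previously generated images mixed into the batch shown to D
    (analogous to a replay buffer in DQN). Illustrative sketch."""

    def __init__(self, max_size=50):  # pool size is an assumption
        self.max_size = max_size
        self.images = []

    def push_and_pop(self, batch):
        out = []
        for img in batch.detach():
            img = img.unsqueeze(0)
            if len(self.images) < self.max_size:
                # pool not full yet: store and return the fresh fake
                self.images.append(img)
                out.append(img)
            elif random.random() < 0.5:
                # return an old fake and replace it with the fresh one
                idx = random.randrange(self.max_size)
                out.append(self.images[idx].clone())
                self.images[idx] = img
            else:
                out.append(img)
        return torch.cat(out, dim=0)

# hypothetical usage inside the D update:
# fake = netG(cond_input)
# d_input = buffer.push_and_pop(fake)   # mix of old and new fakes
# loss_fake = criterion(netD(d_input), fake_labels)
```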

Training Curve (pix2pixBEGAN)

  • L_D and L_G w/ BEGAN

(figure: L_D and L_G training curves for pix2pixBEGAN)

  • We found that both L_D and L_G stay consistently balanced (equilibrium parameter gamma = 0.7) and converge, even though networks D and G differ in model capacity and detailed layer specification.

  • M_global

(figure: M_global over training)

  • As the authors note, M_global is a good indicator for monitoring convergence; a sketch of how it is computed is given below.
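
For reference, a sketch of the BEGAN bookkeeping behind these curves, with gamma = 0.7 as above; the function and variable names are illustrative, not the actual training script. L_D and L_G are built from the L1 reconstruction error of the autoencoder D, the control variable k keeps them in equilibrium, and M_global is the convergence measure.

```python
import torch

def began_terms(netD, real, fake, k, gamma=0.7, lambda_k=0.001):
    """Compute the BEGAN losses, the updated balance term k, and M_global.
    netD is an autoencoder; its loss is the L1 reconstruction error."""
    def recon_loss(x):
        return torch.mean(torch.abs(netD(x) - x))

    loss_real = recon_loss(real)           # L(x)
    loss_fake = recon_loss(fake.detach())  # L(G(z)), no gradient to G
    loss_D = loss_real - k * loss_fake     # discriminator objective
    loss_G = recon_loss(fake)              # generator objective

    # proportional control: k_{t+1} = clip(k_t + lambda_k * (gamma*L(x) - L(G(z))), 0, 1)
    balance = gamma * loss_real.item() - loss_fake.item()
    k = min(max(k + lambda_k * balance, 0.0), 1.0)

    # convergence measure plotted above: M_global = L(x) + |gamma*L(x) - L(G(z))|
    M_global = loss_real.item() + abs(balance)
    return loss_D, loss_G, k, M_global
```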

  • Parsing the log: the training log is saved as train.log in the directory you specified.

  • L_D and L_G w/ GAN

(figure: L_D and L_G training curves for pix2pixGAN)

Comparison

  • pix2pixGAN vs. pix2pixBEGAN
  • CUDA_VISIBLE_DEVICES=x python compare.py --netG_GAN /path/to/netG.pth --netG_BEGAN /path/to/netG.pth --exp /path/to/a/dir/for/saving --tstDataroot /path/to/facades/test/
    (figures: failure cases, GAN vs. BEGAN comparison)
  • Check out more results (order: input, real target, fake (pix2pixBEGAN), fake (pix2pixGAN)).
  • Interpolation in the input space; a sketch is given after this list.
  • CUDA_VISIBLE_DEVICES=x python interpolateInput.py --tstDataroot ~/path/to/your/facades/test/ --interval 14 --exp /path/to/resulting/dir --tstBatchSize 4 --netG /path/to/your/netG_epoch_xxx.pth
  • Upper rows: pix2pixGAN, Lower rows: pix2pixBEGAN interpolation
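
A minimal sketch of the input-space interpolation, assuming two conditioning inputs are blended linearly and each blend is passed through a trained generator; the function name and arguments are ours, not those of interpolateInput.py.

```python
import torch

@torch.no_grad()
def interpolate_inputs(netG, input_a, input_b, steps=14):
    """Linearly blend two conditioning inputs and run each blend through netG."""
    outputs = []
    for i in range(steps + 1):
        alpha = i / steps
        blended = (1.0 - alpha) * input_a + alpha * input_b  # input-space mix
        outputs.append(netG(blended))
    return torch.cat(outputs, dim=0)  # one generated image per interpolation step
```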

Showing reconstruction from D and generation from G

  • (order: input, real target, reconstructed real, fake, reconstructed fake); a sketch for producing such a panel is given below.
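
A sketch of how such a panel could be assembled, assuming netD is the BEGAN autoencoder discriminator, so passing an image through it yields its reconstruction; the function name and the use of torchvision's save_image are our own choices.

```python
import torch
import torchvision.utils as vutils

@torch.no_grad()
def recon_and_gen_panel(netG, netD, input_img, target_img, path="panel.png"):
    """Save a grid with rows: input, real target, D(real), fake, D(fake)."""
    fake = netG(input_img)
    recon_real = netD(target_img)  # BEGAN's D reconstructs its input
    recon_fake = netD(fake)
    rows = torch.cat([input_img, target_img, recon_real, fake, recon_fake], dim=0)
    vutils.save_image(rows, path, nrow=input_img.size(0), normalize=True)
```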

Reference

misc.

  • We apologize for the inconvenience when cloning this project: the included result images are large, so please be patient. (Downloading the zip file seems to take less time.)