
Context Encoders: Feature Learning by Inpainting

This is a PyTorch implementation of the CVPR 2016 paper Context Encoders: Feature Learning by Inpainting.

[Figure: corrupted input and inpainted result]

1) Semantic Inpainting Demo

  1. Install PyTorch http://pytorch.org/

  2. Clone the repository

git clone https://github.com/BoyuanJiang/context_encoder_pytorch.git
  3. Demo

    Download the pre-trained model on Paris StreetView from Google Drive OR BaiduNetdisk

    cp netG_streetview.pth context_encoder_pytorch/model/
    cd context_encoder_pytorch/
    # Inpainting a batch of images
    python test.py --netG model/netG_streetview.pth --dataroot dataset/val --batchSize 100
    # Inpainting one image
    python test_one.py --netG model/netG_streetview.pth --test_image result/test/cropped/065_im.png
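
For intuition, the following is a minimal, self-contained sketch of what the demo step does conceptually: blank out the 64x64 center of a 128x128 image, let the generator predict the missing center, and paste the prediction back. ToyGenerator is a stand-in defined only so the sketch runs on its own; the real demo loads the pre-trained generator weights (netG_streetview.pth) through test_one.py, and details such as the mask fill value may differ from this repository's code.

    import torch
    import torch.nn as nn

    class ToyGenerator(nn.Module):
        """Stand-in for the context-encoder generator: takes the masked
        128x128 image and returns a 64x64 prediction of the missing center."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1),  # 128x128 -> 64x64
                nn.ReLU(inplace=True),
                nn.Conv2d(16, 3, 3, padding=1),            # 3-channel 64x64 center
                nn.Tanh(),
            )

        def forward(self, x):
            return self.net(x)

    netG = ToyGenerator().eval()                  # the demo would load real weights here

    image = torch.rand(1, 3, 128, 128) * 2 - 1    # pretend input, values in [-1, 1]
    masked = image.clone()
    c = 128 // 4                                  # 64x64 center starts at pixel 32
    masked[:, :, c:c + 64, c:c + 64] = 0.0        # blank out the center region

    with torch.no_grad():
        center_pred = netG(masked)                # shape (1, 3, 64, 64)

    inpainted = masked.clone()
    inpainted[:, :, c:c + 64, c:c + 64] = center_pred   # paste the prediction back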

2) Train on your own dataset

  1. Build dataset

    Put your images under dataset/train; all images should be inside a subdirectory:

    dataset/train/subdirectory1/some_images

    dataset/train/subdirectory2/some_images

    ...

    Note: Due to Google's policy, the Paris StreetView dataset is not public; for research use, please contact pathak22. You can also use The Paris Dataset to train your model. (A minimal sketch of a single training step with this directory layout follows this section.)

  2. Train

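# --wtl2 weights the reconstruction (L2) loss against the adversarial loss (see the sketch after this section)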
python train.py --cuda --wtl2 0.999 --niter 200
  3. Test

    This step is similar to the Semantic Inpainting Demo above.
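
For reference, below is a minimal, self-contained sketch of a single generator update under the setup this section describes: images are loaded from the subdirectories of dataset/train, the 64x64 center of each 128x128 crop is blanked, and the generator loss combines an L2 reconstruction term with an adversarial term weighted by wtl2 (0.999 vs. 0.001, as in the paper). ToyGenerator and ToyDiscriminator are stand-ins so the sketch runs on its own; the actual networks, losses, and training loop live in this repository's train.py and may differ in detail.

    import torch
    import torch.nn as nn
    import torchvision.datasets as dset
    import torchvision.transforms as transforms

    class ToyGenerator(nn.Module):
        """Stand-in generator: masked 128x128 image in, 64x64 center prediction out."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1),  # 128x128 -> 64x64
                nn.ReLU(inplace=True),
                nn.Conv2d(16, 3, 3, padding=1),
                nn.Tanh(),
            )
        def forward(self, x):
            return self.net(x)

    class ToyDiscriminator(nn.Module):
        """Stand-in discriminator: scores a 64x64 center patch as real/fake."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1),  # 64x64 -> 32x32
                nn.LeakyReLU(0.2, inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(16, 1),
                nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    # ImageFolder is the reason images must sit in subdirectories of dataset/train.
    transform = transforms.Compose([
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])
    dataset = dset.ImageFolder(root='dataset/train', transform=transform)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

    netG, netD = ToyGenerator(), ToyDiscriminator()
    optG = torch.optim.Adam(netG.parameters(), lr=2e-4)
    bce, mse = nn.BCELoss(), nn.MSELoss()
    wtl2 = 0.999   # --wtl2: weight of reconstruction (L2) loss vs. adversarial loss

    for images, _ in loader:
        c = 128 // 4
        real_center = images[:, :, c:c + 64, c:c + 64].clone()
        masked = images.clone()
        masked[:, :, c:c + 64, c:c + 64] = 0.0        # blank the center region

        fake_center = netG(masked)
        pred = netD(fake_center)
        adv_loss = bce(pred, torch.ones_like(pred))   # generator tries to fool netD
        rec_loss = mse(fake_center, real_center)      # reconstruct the missing center
        lossG = wtl2 * rec_loss + (1 - wtl2) * adv_loss

        optG.zero_grad()
        lossG.backward()
        optG.step()
        break   # one illustrative step; the discriminator update is omitted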