Context Encoders: Feature Learning by Inpainting
This is a PyTorch implementation of the CVPR 2016 paper "Context Encoders: Feature Learning by Inpainting".
1) Semantic Inpainting Demo
-
Install PyTorch http://pytorch.org/
-
Clone the repository
git clone https://github.com/BoyuanJiang/context_encoder_pytorch.git
-
Demo
Download the pre-trained model (trained on Paris StreetView) from Google Drive OR BaiduNetdisk
cp netG_streetview.pth context_encoder_pytorch/model/
cd context_encoder_pytorch/
# Inpaint a batch of images
python test.py --netG model/netG_streetview.pth --dataroot dataset/val --batchSize 100
# Inpaint one image
python test_one.py --netG model/netG_streetview.pth --test_image result/test/cropped/065_im.png
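The demo masks the central region of each input image and asks the generator to fill it in. As a minimal sketch of the geometry involved (following the paper's default setup of 128x128 inputs with a 64x64 central hole; the function name is illustrative, not the repository's actual code):

```python
# Illustrative sketch (not the repository's code) of how the centered
# mask region is located for inpainting: a square hole of mask_size
# pixels centered inside an image of image_size pixels.

def center_mask_coords(image_size, mask_size):
    """Return (top, left, bottom, right) of the centered mask region."""
    offset = (image_size - mask_size) // 2
    return (offset, offset, offset + mask_size, offset + mask_size)

# For the paper's default 128x128 images with a 64x64 hole:
print(center_mask_coords(128, 64))  # (32, 32, 96, 96)
```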
2) Train on your own dataset
-
Build dataset
Put your images under dataset/train; all images must be placed inside subdirectories:
dataset/train/subdirectory1/some_images
dataset/train/subdirectory2/some_images
...
Note: Due to Google's policy, the Paris StreetView dataset is not public; for research use, please contact pathak22. You can also use The Paris Dataset to train your model.
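The layout above (images grouped under at least one level of subdirectories) matches the convention of torchvision's `ImageFolder`-style loaders, which is presumably why images directly under dataset/train are not picked up. A stdlib-only sketch of building and checking such a layout (placeholder files stand in for real images):

```python
# Minimal sketch (stdlib only) of the expected dataset layout:
# dataset/train/<subdirectory>/<image files>. Images placed directly
# under dataset/train, without a subdirectory, will not be found.
import os
import tempfile

root = tempfile.mkdtemp()
for sub in ("subdirectory1", "subdirectory2"):
    os.makedirs(os.path.join(root, "dataset", "train", sub))
    # Placeholder file standing in for real images
    open(os.path.join(root, "dataset", "train", sub, "img_0.png"), "w").close()

train_dir = os.path.join(root, "dataset", "train")
found = sorted(
    os.path.join(sub, name)
    for sub in os.listdir(train_dir)
    for name in os.listdir(os.path.join(train_dir, sub))
)
print(found)
```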
-
Train
python train.py --cuda --wtl2 0.999 --niter 200
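The `--wtl2 0.999` flag corresponds to the joint loss of the paper, which combines a reconstruction (L2) loss with an adversarial loss, L = wtl2 * L_rec + (1 - wtl2) * L_adv, so with wtl2 = 0.999 the reconstruction term dominates the generator's objective. A plain-Python sketch of this weighting (illustrative; not the repository's training loop):

```python
# Illustrative sketch of the joint loss weighting controlled by --wtl2:
#   L = wtl2 * L_rec + (1 - wtl2) * L_adv
# With wtl2 = 0.999, the L2 reconstruction term carries almost all of
# the gradient signal, and the adversarial term only sharpens details.

def joint_loss(l2_loss, adv_loss, wtl2=0.999):
    """Weighted combination of reconstruction and adversarial losses."""
    return wtl2 * l2_loss + (1.0 - wtl2) * adv_loss

# Equal raw losses of 1.0 combine back to 1.0, whatever the weighting:
print(joint_loss(1.0, 1.0))  # 1.0
```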
-
Test
This step is the same as in the Semantic Inpainting Demo above.