fast-neural-style-tensorflow
A TensorFlow implementation of "Perceptual Losses for Real-Time Style Transfer and Super-Resolution".
This code is based on TensorFlow-Slim and OlavHN/fast-neural-style.
Samples:
configuration | style | sample |
---|---|---|
wave.yml | *(style image)* | *(stylized sample)* |
cubist.yml | *(style image)* | *(stylized sample)* |
denoised_starry.yml | *(style image)* | *(stylized sample)* |
mosaic.yml | *(style image)* | *(stylized sample)* |
scream.yml | *(style image)* | *(stylized sample)* |
feathers.yml | *(style image)* | *(stylized sample)* |
udnie.yml | *(style image)* | *(stylized sample)* |
Requirements and Prerequisites:
- Python 2.7.x
- Now supports TensorFlow >= 1.0
Attention: this code also supports TensorFlow 0.11. If that is the version you have, check out commit 5309a2a (git reset --hard 5309a2a).
Also make sure pyyaml is installed:
pip install pyyaml
Use Trained Models:
You can download all 7 trained models from Baidu Drive.
To generate a sample from the model "wave.ckpt-done", run:
python eval.py --model_file <your path to wave.ckpt-done> --image_file img/test.jpg
Then check out generated/res.jpg.
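If you want to stylize a whole folder of images, a minimal sketch along these lines may help. It simply shells out to eval.py once per image, using the same flags shown above, and copies the result out of generated/res.jpg after each run. The input folder, output folder, and model path are placeholders to adjust:

```python
# Batch-stylize every JPEG in a folder by invoking eval.py once per image.
# Assumes eval.py writes its result to generated/res.jpg, as described above.
# "img_dir", "out_dir" and "model" are placeholders -- adjust them.
import os
import shutil
import subprocess

model = 'models/wave.ckpt-done'   # path to a downloaded or trained model
img_dir = 'img'                   # folder with input images
out_dir = 'stylized'              # where to collect the stylized copies

if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

for name in sorted(os.listdir(img_dir)):
    if not name.lower().endswith(('.jpg', '.jpeg')):
        continue
    subprocess.check_call([
        'python', 'eval.py',
        '--model_file', model,
        '--image_file', os.path.join(img_dir, name),
    ])
    # eval.py overwrites generated/res.jpg each run, so keep a copy per input.
    shutil.copy('generated/res.jpg', os.path.join(out_dir, 'styled_' + name))
```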
Train a Model:
To train a model from scratch, first download the VGG16 model from TensorFlow Slim. Extract the file vgg_16.ckpt, then copy it to the folder pretrained/:
cd <this repo>
mkdir pretrained
cp <your path to vgg_16.ckpt> pretrained/
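Before starting a long training run, it can be worth a quick sanity check that the checkpoint is in place and readable. A minimal sketch, assuming TensorFlow 1.x:

```python
# Quick sanity check: make sure pretrained/vgg_16.ckpt exists and can be read.
# Assumes TensorFlow 1.x, where tf.train.NewCheckpointReader is available.
import tensorflow as tf

reader = tf.train.NewCheckpointReader('pretrained/vgg_16.ckpt')
var_shapes = reader.get_variable_to_shape_map()
print('Found %d variables in the VGG16 checkpoint' % len(var_shapes))
for name in sorted(var_shapes)[:5]:
    print('%s %s' % (name, var_shapes[name]))
```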
Then download the COCO dataset and unzip it; you will get a folder named "train2014" containing the raw training images. Create a symbolic link to it:
cd <this repo>
ln -s <your path to the folder "train2014"> train2014
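A quick way to confirm that the symlink resolves and the images are visible to the training script is to count them. This is only a rough check, not something the code itself requires:

```python
# Confirm the train2014 symlink resolves and the images are visible.
import os

files = [f for f in os.listdir('train2014') if f.lower().endswith('.jpg')]
print('train2014 contains %d jpg files' % len(files))
# COCO train2014 should contain roughly 83k images; a much smaller number
# usually means the link points to the wrong place or the unzip was partial.
```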
Train the "wave" model:
python train.py -c conf/wave.yml
(Optional) Use TensorBoard to monitor training:
tensorboard --logdir models/wave/
Checkpoints will be written to "models/wave/".
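Once training has produced checkpoints, one way to locate the newest one and run it through eval.py is sketched below (assuming TensorFlow 1.x; the test image path is just the example from this README):

```python
# Find the most recent checkpoint under models/wave/ and stylize a test image
# with it, using the same eval.py flags shown earlier. Assumes TensorFlow 1.x.
import subprocess
import tensorflow as tf

state = tf.train.get_checkpoint_state('models/wave/')
if state is None:
    raise RuntimeError('No checkpoint found in models/wave/ yet')

ckpt = state.model_checkpoint_path
print('Using checkpoint: %s' % ckpt)
subprocess.check_call([
    'python', 'eval.py',
    '--model_file', ckpt,
    '--image_file', 'img/test.jpg',
])
```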
See the configuration file conf/wave.yml for details.
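Since training is driven by a YAML file, a small sketch like the following (using the pyyaml dependency installed above) can be used to see which options a configuration defines. It only prints whatever keys the file actually contains:

```python
# Print the top-level options defined in a training configuration.
# Uses the pyyaml dependency installed above; the path matches the
# "wave" example from this README.
import yaml

with open('conf/wave.yml') as f:
    conf = yaml.safe_load(f)

for key in sorted(conf):
    print('%s: %s' % (key, conf[key]))
```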