Im2Pencil
PyTorch implementation of our CVPR 2019 paper on controllable pencil illustration from photographs. More results and comparisons are shown here.
A line input (left) and two pencil outline results (middle: clean, right: rough)
A photo input (left) and four pencil shading results (right: [hatching, crosshatching; blending, stippling])
Getting started
- Linux
- NVIDIA GPU
- PyTorch 0.4.1
- MATLAB
- Structured Edge Detection Toolbox by Piotr Dollár
git clone https://github.com/Yijunmaverick/Im2Pencil
cd Im2Pencil
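With the repository cloned, a PyTorch environment matching the requirements above can be set up roughly as follows (a minimal sketch; the exact command depends on your CUDA version, and 0.4.1-era wheels may need to be installed from the PyTorch download archive rather than PyPI):

pip install torch==0.4.1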
Preparation
- Download the pretrained models:
sh pretrained_models/download_models.sh
- Extract the outline and tone images from the input photo (in MATLAB):
cd extract_edge_tone
Im2Pencil_get_edge_tone.m
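If you prefer not to open the MATLAB desktop, the same script can typically be run non-interactively (assuming the Structured Edge Detection Toolbox is on the MATLAB path and the script reads its input/output paths internally, as in the repository version):

matlab -nodisplay -nosplash -r "Im2Pencil_get_edge_tone; exit"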
Testing
- Test with different outline and shading styles:
python test.py --outline_style 1 --shading_style 1
Outline style: 0 for rough and 1 for clean
Shading style: 0, 1, 2, 3 for hatching, crosshatching, stippling, and blending, respectively
For other controllable parameters, check options/test_options.py
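To render every outline/shading combination for a given input in one pass, a small driver script along the lines below can be used. This is a sketch, not part of the repository: only the --outline_style and --shading_style flags come from the command above, and test.py may require further options (input/result paths, etc.) defined in options/test_options.py.

import subprocess

# Style indices as documented above (sketch only; not part of the repo).
OUTLINE_STYLES = {0: 'rough', 1: 'clean'}
SHADING_STYLES = {0: 'hatching', 1: 'crosshatching', 2: 'stippling', 3: 'blending'}

for outline, outline_name in OUTLINE_STYLES.items():
    for shading, shading_name in SHADING_STYLES.items():
        # Invoke the repository's test script once per style pair.
        print('Rendering outline={} ({}), shading={} ({})'.format(
            outline, outline_name, shading, shading_name))
        subprocess.run(
            ['python', 'test.py',
             '--outline_style', str(outline),
             '--shading_style', str(shading)],
            check=True)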
Citation
@inproceedings{Im2Pencil-CVPR-2019,
author = {Li, Yijun and Fang, Chen and Hertzmann, Aaron and Shechtman, Eli and Yang, Ming-Hsuan},
title = {Im2Pencil: Controllable Pencil Illustration from Photographs},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
year = {2019}
}