EDSR in TensorFlow
TensorFlow implementation of Enhanced Deep Residual Networks for Single Image Super-Resolution[1].
It was trained on the DIV2K dataset (Train Data, HR images).
Google Summer of Code with OpenCV
This repository was made during the 2019 GSoC program for the organization OpenCV. The trained models (.pb files) can easily be used for inference in OpenCV with the 'dnn_superres' module; see the OpenCV documentation for details.
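For reference, a minimal inference sketch with the dnn_superres module (requires opencv-contrib-python; the model filename and scale below are placeholders):

    import cv2
    from cv2 import dnn_superres

    # Load an exported, frozen EDSR graph and run super-resolution on one image
    sr = dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x3.pb")   # placeholder path to an exported model
    sr.setModel("edsr", 3)       # algorithm name and scale must match the model
    img = cv2.imread("input.png")
    result = sr.upsample(img)    # upscaled BGR image
    cv2.imwrite("upscaled.png", result)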
Requirements
- tensorflow
- numpy
- cv2 (opencv-python)
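These can typically be installed with pip (package names assumed; the repo pins no versions):

    pip install tensorflow numpy opencv-python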
EDSR
This is the EDSR model, which uses a separate network for each scale (architecture sketched below). Go to branch 'mdsr' for the multi-scale MDSR model.
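As a rough illustration of the architecture from [1] (resblocks without batch normalization, residual scaling, a global skip connection, sub-pixel upsampling), here is a hedged tf.keras sketch; the repository's actual implementation and defaults may differ:

    import tensorflow as tf

    def edsr(scale=3, num_filters=64, num_resblocks=16, res_scaling=0.1):
        """Minimal EDSR sketch: conv head, B resblocks (no batch norm),
        global skip connection, then sub-pixel (pixel-shuffle) upsampling."""
        inputs = tf.keras.Input(shape=(None, None, 3))
        x = head = tf.keras.layers.Conv2D(num_filters, 3, padding='same')(inputs)
        for _ in range(num_resblocks):
            skip = x
            x = tf.keras.layers.Conv2D(num_filters, 3, padding='same',
                                       activation='relu')(x)
            x = tf.keras.layers.Conv2D(num_filters, 3, padding='same')(x)
            x = skip + res_scaling * x  # residual scaling stabilizes training
        x = tf.keras.layers.Conv2D(num_filters, 3, padding='same')(x)
        x = x + head                    # global residual connection
        x = tf.keras.layers.Conv2D(3 * scale ** 2, 3, padding='same')(x)
        x = tf.nn.depth_to_space(x, scale)  # rearrange channels into pixels
        return tf.keras.Model(inputs, x)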
Running
Download the DIV2K dataset. If you want to use another dataset, you will have to calculate the mean of that dataset and set the new mean in 'main.py'. Code for calculating the mean can be found in 'data_utils.py'; a sketch of the idea follows.
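This is a minimal sketch of that computation (per-channel BGR mean over a directory; the glob pattern is a placeholder), not the repo's exact code:

    import glob
    import cv2
    import numpy as np

    def compute_mean(train_dir):
        """Per-channel mean over all training images (BGR, since cv2 loads BGR)."""
        total = np.zeros(3, dtype=np.float64)
        count = 0
        for path in glob.glob(train_dir + "/*.png"):  # placeholder extension
            img = cv2.imread(path).astype(np.float64)
            total += img.reshape(-1, 3).sum(axis=0)
            count += img.shape[0] * img.shape[1]
        return total / count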
Train:
- from scratch:
  python main.py --train --fromscratch --scale <scale> --traindir /path-to-train-images/
- resume/load previous:
  python main.py --train --scale <scale> --traindir /path-to-train-images/
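For context, EDSR in [1] is trained with an L1 (mean absolute error) loss and the Adam optimizer; a short tf.keras sketch reusing the architecture function above (whether this repo uses exactly these settings is an assumption):

    # L1 loss between the super-resolved output and the HR ground truth
    model = edsr(scale=3)  # from the architecture sketch above
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mae')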
Test (compares EDSR with bicubic interpolation using the PSNR metric):
python main.py --test --scale <scale> --image /path-to-image/
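PSNR here is the standard pixel-space metric; a minimal sketch (assumes two 8-bit images of identical shape):

    import numpy as np

    def psnr(a, b, max_val=255.0):
        """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10 * np.log10(max_val ** 2 / mse)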
Upscale (with EDSR):
python main.py --upscale --scale <scale> --image /path-to-image/
Export to .pb:
python main.py --export --scale <scale>
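In TF1, exporting to .pb usually means freezing: converting variables to constants and serializing the graph. A hedged sketch of that general pattern (the checkpoint path and output node name are placeholders, not this repo's actual names):

    import tensorflow as tf

    with tf.Session() as sess:
        saver = tf.train.import_meta_graph("model.ckpt.meta")  # placeholder
        saver.restore(sess, "model.ckpt")
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ["output_node"])  # placeholder node name
        with tf.gfile.GFile("EDSR_x3.pb", "wb") as f:
            f.write(frozen.SerializeToString())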
Extra arguments (number of resblocks, filters, batch size, learning rate, etc.):
python main.py --help
Example
(1) Original picture
(2) Input image
(3) Bicubic scaled (3x) image
(4) EDSR scaled (3x) image
Notes
The .pb files in this repository are quantized. This is done purely to shrink the file sizes from ~150MB to ~40MB, because GitHub does not allow uploads above 100MB. The performance loss due to quantization is minimal. To quantize during export, use --quant <1, 2 or 3> (2 is recommended).
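One common way to quantize the weights of a frozen TF1 graph is the quantize_weights transform from graph_transforms; a sketch under the assumption that the repo does something similar (input/output node names are placeholders):

    import tensorflow as tf
    from tensorflow.tools.graph_transforms import TransformGraph

    # Store weights as 8-bit values to shrink the serialized graph
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("EDSR_x3.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    quantized = TransformGraph(graph_def, ["input_node"], ["output_node"],
                               ["quantize_weights"])
    with tf.gfile.GFile("EDSR_x3_quant.pb", "wb") as f:
        f.write(quantized.SerializeToString())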
References
[1] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," 2nd NTIRE Workshop and Challenge on Image Super-Resolution (New Trends in Image Restoration and Enhancement), in conjunction with CVPR 2017.