

Image Super Resolution using Keras 2+

Implementation of Image Super Resolution CNN in Keras from the paper Image Super-Resolution Using Deep Convolutional Networks.

Also contains models that outperform the above-mentioned model: Expanded Super Resolution; Denoising Auto Encoder SRCNN, which outperforms both of the above models; and Deep Denoise SR, which, with certain limitations, outperforms all of the above.

Setup

Supports Keras with either the Theano or TensorFlow backend. Since Theano is no longer being developed, TensorFlow is now the default backend for this project.

Requires Pillow, imageio, scikit-learn, scipy, Keras 2.3.1, and TensorFlow 1.15.0.
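
These can be installed with pip, pinning the versions listed above (package names are the standard PyPI ones, e.g. scikit-learn for sklearn):
pip install pillow imageio scikit-learn scipy keras==2.3.1 tensorflow==1.15.0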

Usage

Note: The project is going to be reworked, so please refer to Framework-Updates.md for the changes that will affect performance.

The model weights are already provided in the weights folder, so the script can simply be run as:
python main.py "imgpath", where imgpath is the full path to the image.

The default model is DDSRCNN (ddsr), which outperforms the other three models. To switch models:
python main.py "imgpath" --model="type", where type is one of sr, esr, dsr, or ddsr.

If the scaling factor needs to be altered, then:
python main.py "imgpath" --scale=s, where s can be any number. Default is s = 2.

If the intermediate step (the bilinearly scaled image) is needed, then:
python main.py "imgpath" --scale=s --save_intermediate="True"

Windows Helper

windows_helper contains a C# program for Windows that makes it easy to use the Super Resolution script with any of the available models.

Parameters

--model : One of "sr" (Image Super Resolution), "esr" (Expanded SR), "dsr" (Denoising Auto Encoder SR), "ddsr" (Deep Denoise SR), "rnsr" (ResNet SR) or "distilled_rnsr" (Distilled ResNet SR)
--scale : Scaling factor; can be any integer. Default is 2x scaling.
--save_intermediate : Whether to save the intermediate result before applying the Super Resolution algorithm.
--mode : "fast" or "patch". Patch mode can be useful for memory-constrained GPU upscaling, whereas fast mode submits the whole image for upscaling in one pass.
--suffix : Suffix of the scaled image filename.
--patch_size : Used only in patch mode. Sets the size of each patch.
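
As an illustration, several of these flags can be combined in one invocation (the image path and patch size below are placeholders, not recommendations):
python main.py "imgpath" --model=ddsr --mode=patch --patch_size=8 --suffix=scaled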

Model Architecture

Super Resolution CNN (SRCNN)

This model is the simplest of those described in the paper above, consisting of the 9-1-5 architecture. Larger architectures can easily be made, but they come at the cost of execution time, especially on CPU.

However, there are some differences from the original paper:
[1] Used the Adam optimizer instead of RMSProp.
[2] This model contains some 21,000 parameters, more than the 8,400 of the original paper.
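
For concreteness, here is a minimal Keras sketch of such a 9-1-5 network reflecting the two differences above; the filter counts (64 and 32) are assumptions inferred from the parameter count, not necessarily the repository's exact configuration:

```python
# Minimal 9-1-5 SRCNN sketch (filter counts are illustrative assumptions).
from keras.layers import Input, Conv2D
from keras.models import Model

inp = Input(shape=(None, None, 3))  # bilinearly pre-upscaled RGB image
x = Conv2D(64, (9, 9), activation='relu', padding='same')(inp)  # patch extraction
x = Conv2D(32, (1, 1), activation='relu', padding='same')(x)    # non-linear mapping
out = Conv2D(3, (5, 5), padding='same')(x)                      # reconstruction
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')  # Adam rather than RMSProp, per [1]
```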

Note that these models underperform relative to the results posted in the paper. This may be because only 91 images were used as the training set, compared to the entire ILSVRC 2013 image set. The model still performs well, though the output images are slightly noisy.

Expanded Super Resolution CNN (ESRCNN)

The above is called "Expanded SRCNN", which performs slightly worse than the default SRCNN model on Set5 (PSNR 31.78 dB vs 32.4 dB).

The "Expansion" occurs in the intermediate hidden layer, in which instead of just 1x1 kernels, we also use 3x3 and 5x5 kernels in order to maximize information learned from the layer. The outputs of this layer are then averaged, in order to construct more robust upscaled images.

Denoising (Auto Encoder) Super Resolution CNN (DSRCNN)

The above is the "Denoising Auto Encoder SRCNN", which performs even better than SRCNN on Set5 (PSNR 32.57 dB vs 32.4 dB).

This model uses bridge connections between the convolutional layers of the same level in order to speed up convergence and improve output results. The bridge connections are averaged to be more robust.

Since the training images are passed through a Gaussian filter (sigma = 0.5), downscaled to 1/3rd of their size, and then upscaled back to the original 33x33 size, the inputs can be considered "noisy". This auto encoder therefore quickly improves on the earlier results and reduces the noisy-output problem faced by the simpler SRCNN model.
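
A minimal sketch of the averaged bridge connection idea (depth and widths are assumptions for illustration):

```python
# Stacked convolutions with an averaged "bridge" between same-level layers.
from keras.layers import Input, Conv2D, Average
from keras.models import Model

inp = Input(shape=(None, None, 3))
c1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inp)   # encoder level
c2 = Conv2D(64, (3, 3), activation='relu', padding='same')(c1)
d1 = Conv2D(64, (3, 3), activation='relu', padding='same')(c2)    # decoder level
b = Average()([c1, d1])                        # bridge connection, averaged
out = Conv2D(3, (5, 5), padding='same')(b)
model = Model(inp, out)
```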

Deep Denoising Super Resolution (DDSRCNN)

The above is the "Deep Denoising SRCNN", a modified form of the architecture described in the paper "Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections", applied to image super-resolution. It can perform far better than even the Denoising SRCNN, but is currently not working properly.

Compared to the 30-layer architecture used in that paper, this can be considered a highly simplified and shallow model.

ResNet Super Resolution (ResNet SR)

The above is the "ResNet SR" model, derived from the "SRResNet" model of the paper [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https://arxiv.org/abs/1609.04802).

Currently uses only 6 residual blocks and 2x upscaling rather than the 16 residual blocks and the 4x upscaling from the paper.
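
A minimal sketch of the corresponding residual block plus 2x upscaling (layer sizes are illustrative; the repository's exact configuration may differ):

```python
# Six identity-skip residual blocks followed by 2x upscaling.
from keras.layers import Input, Conv2D, Add, UpSampling2D
from keras.models import Model

def residual_block(x, filters=64):
    y = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
    y = Conv2D(filters, (3, 3), padding='same')(y)
    return Add()([x, y])                       # identity skip connection

inp = Input(shape=(None, None, 3))
x = Conv2D(64, (3, 3), activation='relu', padding='same')(inp)
for _ in range(6):                             # 6 residual blocks, per the text
    x = residual_block(x)
x = UpSampling2D((2, 2))(x)                    # 2x upscaling
out = Conv2D(3, (3, 3), padding='same')(x)
model = Model(inp, out)
```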

Efficient SubPixel Convolutional Neural Network (ESPCNN)

The above model is the Efficient Sub-Pixel Convolutional Neural Network, which uses subpixel convolution layers to upscale rather than UpSampling or Deconvolution layers. It has not yet been trained properly.
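
A minimal sketch of subpixel (pixel-shuffle) upscaling, implemented here with tf.depth_to_space through a Lambda layer (a common approach with the TensorFlow backend; not necessarily the repository's exact implementation):

```python
# Learn r^2 channels per output channel, then rearrange them into pixels.
import tensorflow as tf
from keras.layers import Input, Conv2D, Lambda
from keras.models import Model

r = 2                                          # upscaling factor
inp = Input(shape=(None, None, 3))
x = Conv2D(64, (5, 5), activation='relu', padding='same')(inp)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = Conv2D(3 * r ** 2, (3, 3), padding='same')(x)
out = Lambda(lambda t: tf.depth_to_space(t, r))(x)  # subpixel rearrangement
model = Model(inp, out)
```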

GAN Image Super Resolution (GANSR)

The above model is the GAN-trained Image Super Resolution network, based on ResNet SR and the SRGAN from the paper above.

Note: Does not work properly right now.

Distilled ResNet Super Resolution (Distilled ResNetSR)

The above model is a smaller ResNet SR that was trained using model distillation techniques from the "teacher" model - the original larger ResNet SR (with 6 residual blocks).

The model was trained via the distill_network.py script, which can be used to perform distillation training from any teacher network onto a smaller "student" network.
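
A minimal sketch of the underlying idea: freeze the teacher and fit the student to its outputs (the script's actual loss and API may differ; the models below are toy stand-ins):

```python
# Train a small "student" to mimic a frozen "teacher" network.
import numpy as np
from keras.layers import Input, Conv2D
from keras.models import Model

def make_sr_net(filters):
    inp = Input(shape=(None, None, 3))
    x = Conv2D(filters, (3, 3), activation='relu', padding='same')(inp)
    return Model(inp, Conv2D(3, (3, 3), padding='same')(x))

teacher, student = make_sr_net(64), make_sr_net(8)   # student is much smaller
teacher.trainable = False

x_train = np.random.rand(4, 32, 32, 3)               # placeholder data
pseudo_targets = teacher.predict(x_train)            # teacher outputs as targets
student.compile(optimizer='adam', loss='mse')
student.fit(x_train, pseudo_targets, epochs=1)
```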

Non-Local ResNet Super Resolution (Non-Local ResNetSR)

The above model is an experiment to see whether Non-Local blocks can produce better super resolution.

Various issues:

  1. They break the fully convolutional behaviour of the network. Due to the flatten and reshape parts of this module, the image size must be fixed when building the model. Therefore you cannot construct one model and then pass arbitrarily sized input images for evaluation.

  2. The non-local blocks require a vast amount of memory for their intermediate products. I think this is why the authors suggested using them near the end of the network, where the spatial dimensions are just 14x14 or 7x7.

I hit consistent OOM (out-of-memory) errors when trying it at multiple positions of a super resolution network, and could only successfully place it at the last ResNet block without OOM (on just a 4 GB 980M).
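
A back-of-the-envelope calculation shows why (the spatial sizes below are illustrative): the pairwise affinity matrix of a non-local block grows with the square of the number of spatial positions.

```python
# float32 memory for the (HW x HW) affinity matrix of one non-local block
def nonlocal_affinity_mb(h, w, dtype_bytes=4):
    n = h * w                              # number of spatial positions
    return n * n * dtype_bytes / 1024 ** 2

print(nonlocal_affinity_mb(14, 14))    # ~0.15 MB: cheap at classifier-tail sizes
print(nonlocal_affinity_mb(128, 128))  # ~1024 MB: prohibitive at SR feature sizes
```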

Finally, I was able to train a model anyway, and it achieved fairly high PSNR scores. I wasn't able to evaluate that model directly, but I was able to distill it into an ordinary ResNet, which obtained exactly the same PSNR score as the original non-local model. Evaluating the distilled model, all the images came out slightly smoothed. This is worse than a distilled ResNet, which obtains a lower PSNR score but sharper images.

Training

If you wish to train the network on your own data set, follow these steps (performance may vary):
[1] Save all of your input images (of any size) in the "input_images" folder.
[2] Run the transform_images(input_path, scale_factor) function from img_utils.py. By default, input_path is the "input_images" path. Note: unless you are training ESPCNN, set the variable true_upsampling to False and then run the img_utils.py script to generate the dataset; only for ESPCNN training do you need to set true_upsampling to True.
[3] Open tests.py and un-comment the lines at model.fit(...), where model can be sr, esr, dsr, or ddsr. Note: it may be useful to save the original weights in some other location.
[4] Execute tests.py to begin training. A GPU is recommended, although if only a small number of images are provided, a GPU may not be required. (A sketch of steps [2]-[4] follows.)
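
A minimal sketch of steps [2]-[4]; transform_images comes from the text above, while the commented model.fit(...) call stands in for the lines to un-comment in tests.py:

```python
# Step [2]: generate the training dataset from the "input_images" folder.
from img_utils import transform_images

transform_images("input_images", 2)   # (input_path, scale_factor)

# Steps [3]-[4]: in tests.py, un-comment and run the relevant training line,
# where model is one of sr, esr, dsr, ddsr:
# model.fit(...)
```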

Caveats

Very large images may not work on the GPU. Therefore:
[1] If using Theano, set device="cpu" and cnmem=0.0 in theanorc.txt.
[2] If using TensorFlow, set it to CPU mode (one way is sketched below).
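
For TensorFlow, one common way to force CPU mode (an assumption on my part, not a documented option of this repository) is to hide the GPUs via an environment variable before TensorFlow is imported:

```python
# Hide all GPUs from TensorFlow; must run before the first TensorFlow import.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```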

On the CPU, extremely high resolution images of up to 6000 x 6000 pixels can be handled with 16 GB of RAM.

Examples

There are 14 extra images provided in results, 2 of which (Monarch Butterfly and Zebra) have been scaled using bilinear interpolation, SRCNN, ESRCNN, and DDSRCNN.

Monarch Butterfly

[Image grid: Bilinear / SRCNN / ESRCNN / DDSRCNN results]

Zebra

[Image grid: Bilinear / SRCNN / ESRCNN / DDSRCNN results]
