
[ECCV] Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration. Advances in Image Manipulation (AIM) workshop ECCV 2022. Try it out! over 3.3M runs https://replicate.com/mv-lab/swin2sr

Swin2SR @ ECCV 2022 AIM Workshop

Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration



Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte

Computer Vision Lab, CAIDAS, University of Würzburg | MegaStudyEdu, South Korea

TLDR: Photorealistic super-resolution of compressed images using transformers / neural networks.

At AISP there is more work on image processing, low-level vision and computational photography.


News 🚀🚀

swin2sr-replicate


This is the official repository and PyTorch implementation of Swin2SR. We provide the supplementary material, code, pretrained models and demos. Swin2SR is a possible improvement over the famous SwinIR by Jingyun Liang (kudos for such an amazing contribution ✋). Our model achieves state-of-the-art performance in:

  • classical, lightweight and real-world image super-resolution (SR)
  • compressed input super-resolution
  • JPEG compression artifact reduction

swin2sr

Compression plays an important role in the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformer-based methods such as SwinIR show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2 to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and data hunger. We conduct experiments on three representative tasks: JPEG compression artifact removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".


Contents

  1. Training
  2. Results
  3. Demos
  4. Testing
  5. Citation and Acknowledgement
  6. Contact

Training

The training code is at KAIR. We follow the same training setup as SwinIR by Jingyun Liang. We are working on KAIR integration 👀 More details about the training setup are in our paper.

Why move to Swin Transformer V2? Especially in the case of lightweight super-resolution, we noticed that our model converged approximately 2× faster using the same experimental setup as SwinIR. We provide the details in the paper, Sections 3 and 4.2.

Please check our demos, ready to run 🚀


Results

We achieved state-of-the-art performance on classical, lightweight and real-world image Super-Resolution (SR), JPEG compression artifact reduction, and compressed input super-resolution. We mainly use the DIV2K and Flickr2K datasets for training, and for testing: RealSRSet, 5images, Classic5, Set5, Set14, BSD100, Urban100 and Manga109.

🌎 All visual results of Swin2SR can be downloaded here. We also provide links to download the original datasets. More details in our paper.


Compressed inputs | Swin2SR output
frog_in | frog_swin2sr
div2k_in | div2k_swin2sr
buildings_in | buildings_swin2sr

swin2sr makima demo


🌎 All the qualitative samples can be downloaded here.

Basic inference setup

  1. Create a folder inputs and put the input images there. The model expects low-quality, low-resolution JPEG compressed images.

  2. Select --scale; the default is 4, meaning we increase the resolution of the image 4 times. For example, a 1MP image (1000x1000) is upscaled to near 4K (4000x4000).

  3. Run our model using main_test_swin2sr.py with --save_img_only. The pre-trained models are included in our repo; you can download them from here or check the repo releases. It is important to select the proper --task; by default we do compressed input super-resolution (compressed_sr).

  4. We process the images in inputs/ and store the outputs in results/swin2sr_{TASK}_x{SCALE}, where TASK and SCALE are the selected options. You can simply browse results/.

python main_test_swin2sr.py --task compressed_sr --scale 4 --training_patch_size 48 --model_path model_zoo/swin2sr/Swin2SR_CompressedSR_X4_48.pth  --folder_lq ./inputs --save_img_only
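Step 1 above says the model expects low-quality, low-resolution JPEG inputs. If you only have clean images, a small helper like the following can generate suitable test inputs. This is an illustrative sketch (not part of the repo), assuming Pillow is installed; the folder and file names are just examples matching the command above.

```python
from pathlib import Path
from PIL import Image

def make_compressed_lr(src_path, out_dir="inputs", scale=4, jpeg_quality=10):
    """Downscale an image by `scale` and save it as a heavily compressed JPEG,
    mimicking the low-quality, low-resolution inputs the model expects."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(src_path).convert("RGB")
    # Bicubic downscaling to 1/scale of the original resolution.
    lr = img.resize((img.width // scale, img.height // scale), Image.BICUBIC)
    out_path = out_dir / (Path(src_path).stem + ".jpg")
    # A low JPEG quality factor introduces the compression artifacts
    # that the compressed_sr task is trained to remove.
    lr.save(out_path, "JPEG", quality=jpeg_quality)
    return out_path
```

Running the model on an inputs/ folder prepared this way reproduces the intended compressed-input scenario.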

To reproduce results, calculate metrics and run further evaluation, please check the Testing section below.
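For a quick sanity check of a restored output against its ground truth, PSNR can be computed with a few lines of NumPy. This is a minimal sketch, not the repo's evaluation code (benchmark PSNR is typically measured on the Y channel with border cropping, as in SwinIR):

```python
import numpy as np

def psnr(ground_truth, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two uint8 image arrays."""
    gt = np.asarray(ground_truth, dtype=np.float64)
    out = np.asarray(restored, dtype=np.float64)
    mse = np.mean((gt - out) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)
```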


Demos

🔥 🚀 ✅ Kaggle kernel demo ready to run! Easy to follow; includes testing for multiple SR applications.

Click here to see what the Kaggle demo looks like

kaggle demo


The official Swin2SR Super-Resolution Demo is also available in Google Colab.

We also have an interactive demo in Hugging Face Spaces 🤗, no login required! Just click and upload images.

swin2sr huggingface demo

We are working on more interactive demos 👀 Contact us if you have ideas!



Testing

The original evaluation datasets can be downloaded from the following Kaggle Dataset

sr-benchmarks

Classical image super-resolution (SR): Set5 + Set14 + BSD100 + Urban100 + Manga109 - download here

Real-world image SR: RealSRSet and 5images - download here

Grayscale/color JPEG compression artifact reduction: Classic5 + LIVE1 - download here

We follow the same evaluation setup as SwinIR by Jingyun Liang.

🚀 You can check this evaluation process (and the following points) in our interactive kernel Official Swin2SR Demo Results.


ClassicalSR

python main_test_swin2sr.py --task classical_sr --scale 2 --training_patch_size 64 --model_path model_zoo/swin2sr/Swin2SR_ClassicalSR_X2_64.pth --folder_lq testsets/Set5/LR_bicubic/X2 --folder_gt testsets/Set5/HR

python main_test_swin2sr.py --task classical_sr --scale 4 --training_patch_size 64 --model_path model_zoo/swin2sr/Swin2SR_ClassicalSR_X4_64.pth --folder_lq testsets/Set5/LR_bicubic/X4 --folder_gt testsets/Set5/HR

Lightweight

python main_test_swin2sr.py --task lightweight_sr --scale 2 --training_patch_size 64 --model_path model_zoo/swin2sr/Swin2SR_Lightweight_X2_64.pth --folder_lq testsets/Set5/LR_bicubic/X2 --folder_gt testsets/Set5/HR

RealSR

python main_test_swin2sr.py --task real_sr --scale 4 --model_path model_zoo/swin2sr/Swin2SR_RealworldSR_X4_64_BSRGAN_PSNR.pth --folder_lq testsets/RealSRSet+5images

CompressedSR

python main_test_swin2sr.py --task compressed_sr --scale 4 --training_patch_size 48 --model_path model_zoo/swin2sr/Swin2SR_CompressedSR_X4_48.pth --folder_gt path/to/DIV2K_Valid_HR --folder_lq /path/to/DIV2K_Valid_LR/Compressed_X4

JPEG Compression Artifact Reduction, Dynamic, GrayScale

python main_test_swin2sr.py --task jpeg_car --jpeg 10 --model_path model_zoo/swin2sr/Swin2SR_Jpeg_dynamic.pth --folder_gt /path/to/classic5
python main_test_swin2sr.py --task jpeg_car --jpeg 20 --model_path model_zoo/swin2sr/Swin2SR_Jpeg_dynamic.pth --folder_gt /path/to/classic5
python main_test_swin2sr.py --task jpeg_car --jpeg 30 --model_path model_zoo/swin2sr/Swin2SR_Jpeg_dynamic.pth --folder_gt /path/to/classic5
python main_test_swin2sr.py --task jpeg_car --jpeg 40 --model_path model_zoo/swin2sr/Swin2SR_Jpeg_dynamic.pth --folder_gt /path/to/classic5
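The four commands above differ only in the --jpeg quality factor, so they can be driven from a small Python wrapper. This is an illustrative convenience script (not part of the repo); the model and dataset paths are placeholders, and with dry_run=False it simply shells out to main_test_swin2sr.py, which must be present locally along with the pretrained model:

```python
import subprocess

def run_jpeg_car(qualities=(10, 20, 30, 40),
                 model_path="model_zoo/swin2sr/Swin2SR_Jpeg_dynamic.pth",
                 folder_gt="/path/to/classic5",
                 dry_run=True):
    """Build (and optionally run) one jpeg_car evaluation command per quality factor."""
    commands = []
    for q in qualities:
        cmd = ["python", "main_test_swin2sr.py",
               "--task", "jpeg_car", "--jpeg", str(q),
               "--model_path", model_path,
               "--folder_gt", folder_gt]
        commands.append(cmd)
        if not dry_run:
            # Requires the repo and the pretrained model to be available locally.
            subprocess.run(cmd, check=True)
    return commands
```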

Related Work

SwinIR: Image Restoration Using Swin Transformer by Liang et al, ICCVW 2021.

AISP: AI Image Signal Processing by Marcos Conde, Radu Timofte and collaborators, 2022.

AIM 2022 Challenge on Super-Resolution of Compressed Image and Video organized by Ren Yang.

Swin Transformer V2: Scaling Up Capacity and Resolution by Liu et al, CVPR 2022.


Citation and Acknowledgement

@inproceedings{conde2022swin2sr,
  title={{S}win2{SR}: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration},
  author={Conde, Marcos V and Choi, Ui-Jin and Burchi, Maxime and Timofte, Radu},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
  year={2022}
}

@article{liang2021swinir,
  title={SwinIR: Image Restoration Using Swin Transformer},
  author={Liang, Jingyun and Cao, Jiezhang and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu},
  journal={arXiv preprint arXiv:2108.10257},
  year={2021}
}

This project is released under the Apache 2.0 license. The code is heavily based on Swin Transformer and SwinV2 Transformer by Ze Liu. We also refer to code in KAIR, BasicSR and SwinIR. Please also follow their licenses. Thanks for their awesome work.

Contact

Marcos Conde ([email protected]) and Ui-Jin Choi ([email protected]) are the contact persons. Please add in the email subject "swin2sr".
