Uformer: A General U-Shaped Transformer for Image Restoration (CVPR 2022)
Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, Houqiang Li
Update:
- 2022.07.06 Upload new code and models for Uformer.
- 2022.04.09 Upload results of Uformer on denoising (SIDD, DND), motion deblurring (GoPro, HIDE, RealBlur-J/-R), and defocus deblurring (DPDD).
- 2022.03.02 Uformer has been accepted by CVPR 2022!
- 2021.11.30 Update the Uformer arXiv paper. The new code, models, and results will be uploaded.
- 2021.10.28 Release the results of Uformer32 on SIDD and DND.
- 2021.09.30 Release pre-trained Uformer16 for SIDD denoising.
- 2021.08.19 Release a pre-trained model (Uformer32)! Add a script for FLOP/GMAC calculation.
- 2021.07.29 Add a script for testing the pre-trained model on images of arbitrary resolution.
In this paper, we present Uformer, an effective and efficient Transformer-based architecture, in which we build a hierarchical encoder-decoder network using the Transformer block for image restoration. Uformer has two core designs to make it suitable for this task. The first key element is a local-enhanced window Transformer block, where we use non-overlapping window-based self-attention to reduce the computational requirement and employ the depth-wise convolution in the feed-forward network to further improve its potential for capturing local context. The second key element is that we explore three skip-connection schemes to effectively deliver information from the encoder to the decoder. Powered by these two designs, Uformer enjoys a high capability for capturing useful dependencies for image restoration. Extensive experiments on several image restoration tasks demonstrate the superiority of Uformer, including image denoising, deraining, deblurring and demoireing. We expect that our work will encourage further research to explore Transformer-based architectures for low-level vision tasks.
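To make the two core designs concrete, below is a minimal PyTorch sketch of (1) self-attention restricted to non-overlapping windows and (2) a feed-forward block with a depth-wise convolution for local context. The module names, window size, and shapes are assumptions for illustration, not the exact implementation in this repository; see model.py for the real blocks.

import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    # Multi-head self-attention computed independently inside non-overlapping windows.
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.ws = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C), H and W divisible by window_size
        B, H, W, C = x.shape
        ws = self.ws
        # partition into (B * num_windows, ws*ws, C)
        x = x.view(B, H // ws, ws, W // ws, ws, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, ws * ws, C)
        x, _ = self.attn(x, x, x)            # attention only within each window
        # reverse the partition back to (B, H, W, C)
        x = x.view(B, H // ws, W // ws, ws, ws, C).permute(0, 1, 3, 2, 4, 5)
        return x.reshape(B, H, W, C)

class LeFF(nn.Module):
    # Feed-forward network with a depth-wise conv to strengthen local context.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x):  # x: (B, H, W, C)
        x = self.act(self.fc1(x))
        x = x.permute(0, 3, 1, 2)            # (B, hidden, H, W) for the conv
        x = self.act(self.dwconv(x))
        x = x.permute(0, 2, 3, 1)
        return self.fc2(x)

x = torch.randn(1, 64, 64, 32)
y = LeFF(32)(WindowAttention(32)(x))
print(y.shape)  # torch.Size([1, 64, 64, 32])

In the real Uformer block these two modules are wrapped with layer normalization and residual connections, and the encoder/decoder stacks them in a U-shaped hierarchy with skip connections.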
Package dependencies
The project is built with PyTorch 1.9.0, Python 3.7, and CUDA 11.1. For the package dependencies, install them with:
pip install -r requirements.txt
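To confirm that your environment matches, a quick check (nothing Uformer-specific) is:

import torch
print(torch.__version__)           # expect 1.9.0
print(torch.version.cuda)          # expect 11.1
print(torch.cuda.is_available())   # True if a GPU is visible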
Pretrained model
Results from the pretrained model
- Uformer_B: SIDD | DND | GoPro | HIDE | RealBlur-J | RealBlur-R | DPDD
Data preparation
Denoising
For the SIDD training data, download the SIDD-Medium dataset from the official URL. Then generate the training patches by:
python3 generate_patches_SIDD.py --src_dir ../SIDD_Medium_Srgb/Data --tar_dir ../datasets/denoising/sidd/train
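The script crops the full-resolution noisy/ground-truth pairs into fixed-size training patches. A rough sketch of the idea follows; the patch size, stride, file-name pattern, and output sub-folders here are assumptions, and the actual options live in generate_patches_SIDD.py:

import os
from glob import glob
import numpy as np
from PIL import Image

src_dir = '../SIDD_Medium_Srgb/Data'            # assumed: one folder per scene with NOISY/GT pairs
tar_dir = '../datasets/denoising/sidd/train'
ps, stride = 256, 256                            # assumed patch size and stride

os.makedirs(os.path.join(tar_dir, 'input'), exist_ok=True)
os.makedirs(os.path.join(tar_dir, 'groundtruth'), exist_ok=True)

for idx, noisy_path in enumerate(sorted(glob(os.path.join(src_dir, '*/NOISY_SRGB*.PNG')))):
    gt_path = noisy_path.replace('NOISY', 'GT')  # assumed naming convention
    noisy = np.array(Image.open(noisy_path))
    gt = np.array(Image.open(gt_path))
    h, w = noisy.shape[:2]
    for i in range(0, h - ps + 1, stride):
        for j in range(0, w - ps + 1, stride):
            name = f'{idx:04d}_{i}_{j}.png'
            Image.fromarray(noisy[i:i + ps, j:j + ps]).save(os.path.join(tar_dir, 'input', name))
            Image.fromarray(gt[i:i + ps, j:j + ps]).save(os.path.join(tar_dir, 'groundtruth', name))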
For evaluation on SIDD and DND, you can download data from here.
Deblurring
For training on GoPro and evaluation on GoPro, HIDE, RealBlur-J, and RealBlur-R, you can download the data from here.
Then put all the denoising data into ../datasets/denoising, and all the deblurring data into ../datasets/deblurring; an example layout is sketched below.
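A layout along the following lines should work; only ../datasets/denoising/sidd/train is stated above, so the remaining sub-folder names are assumptions and the dataset loaders are the ground truth if loading fails:

../datasets/
  denoising/
    sidd/
      train/        (output of generate_patches_SIDD.py)
      val/          (assumed name for the SIDD validation data)
  deblurring/
    GoPro/
      train/        (assumed)
      test/         (assumed)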
Training
Denoising
To train Uformer on SIDD, you can begin the training by:
sh script/train_denoise.sh
Deblurring
To train Uformer on GoPro, you can begin the training by:
sh script/train_motiondeblur.sh
Evaluation
To evaluate Uformer, you can run:
sh script/test.sh
To evaluate on each dataset, uncomment the corresponding line in the script.
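Restoration results are usually reported as PSNR/SSIM against the ground truth. If you want to score saved outputs yourself, a minimal sketch with scikit-image is given below; the folders restored/ and gt/ are placeholders, not paths used by this repository:

import os
from glob import glob
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

psnrs, ssims = [], []
for restored_path in sorted(glob('restored/*.png')):               # placeholder folder
    gt_path = os.path.join('gt', os.path.basename(restored_path))  # placeholder folder
    restored = np.array(Image.open(restored_path))
    gt = np.array(Image.open(gt_path))
    psnrs.append(peak_signal_noise_ratio(gt, restored, data_range=255))
    ssims.append(structural_similarity(gt, restored, channel_axis=-1, data_range=255))
print(f'PSNR: {np.mean(psnrs):.2f} dB  SSIM: {np.mean(ssims):.4f}')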
Computational Cost
We provide a simple script in model.py to calculate the FLOPs/GMACs ourselves. You can change the configuration and run:
python3 model.py
The manual calculation of GMACs in this repo differs slightly from the numbers in the main paper, but the difference does not affect the conclusions. We will correct the paper later.
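As a cross-check of the numbers from model.py, a generic MAC counter such as thop can also be run on the model. The constructor arguments below are assumptions for illustration; match them to the configuration you actually use:

import torch
from thop import profile            # pip install thop
from model import Uformer           # assumes model.py is on the Python path

net = Uformer(img_size=256, embed_dim=32)   # assumed arguments, adjust to your config
x = torch.randn(1, 3, 256, 256)
macs, params = profile(net, inputs=(x,))
print(f'{macs / 1e9:.2f} GMACs, {params / 1e6:.2f} M params')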
Citation
If you find this project useful in your research, please consider citing:
@InProceedings{Wang_2022_CVPR,
author = {Wang, Zhendong and Cun, Xiaodong and Bao, Jianmin and Zhou, Wengang and Liu, Jianzhuang and Li, Houqiang},
title = {Uformer: A General U-Shaped Transformer for Image Restoration},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {17683-17693}
}
Acknowledgement
This code borrows heavily from MIRNet and SwinTransformer.
Contact
Please contact us if you have any questions or suggestions (Zhendong Wang [email protected], Xiaodong Cun [email protected]).