  • Stars: 302
  • Rank: 138,030 (top 3%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 2 years ago
  • Updated: over 1 year ago


Repository Details

[CVPRW 2022] MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment

MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment

Sidi Yang*, Tianhe Wu*, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang and Yujiu Yang

Tsinghua University Intelligent Interaction Group

🚀 🚀 🚀 Updates:

  • something more...
  • ✅ Mar. 11, 2023: The checkpoint of the model trained on the Koniq10k dataset has been released.
  • ✅ Mar. 10, 2023: We release the large-dataset (KADID-10K) checkpoint and add a script for predicting the quality score of a single image.
  • ✅ Apr. 11, 2022: We release the MANIQA source code and the checkpoint of PIPAL22.


This repository is the official PyTorch implementation of MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment. 🔥🔥🔥 We won first place in the NTIRE 2022 Perceptual Image Quality Assessment Challenge, Track 2: No-Reference.

[Image grid: ground-truth reference and four distorted versions, for two example scenes; the rank of each distortion is given in parentheses]

               Distortion 1    Distortion 2    Distortion 3    Distortion 4
MOS (GT)       1539.1452 (1)   1371.4593 (2)   1223.4258 (3)   1179.6223 (4)
Ours (MANIQA)  0.743674 (1)    0.625845 (2)    0.504243 (3)    0.423222 (4)

MOS (GT)       4.33 (1)        2.27 (2)        1.33 (3)        1.1 (4)
Ours (MANIQA)  0.8141 (1)      0.2615 (2)      0.0871 (3)      0.0490 (4)

Model predictions for five further example images: 0.3398, 0.2612, 0.3078, 0.3716, 0.3581

No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods are far from meeting the need for accurate quality predictions on images with GAN-based distortions. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortions. We first extract features via ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of images, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction is applied to predict the final score, weighting each patch's score by its predicted importance. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. In addition, our method ranked first in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge, Track 2: No-Reference.
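To make the dual-branch, patch-weighted prediction concrete, here is a minimal PyTorch sketch (the tensor names, shapes, and the sigmoid activation are assumptions for illustration, not the repository's actual code): one branch produces a per-patch quality score, the other a per-patch weight, and the final image score is the weighted average.

import torch

def patch_weighted_score(patch_scores: torch.Tensor, patch_weights: torch.Tensor) -> torch.Tensor:
    """Combine per-patch scores into one image-level score.

    patch_scores, patch_weights: (batch, num_patches)
    """
    w = torch.sigmoid(patch_weights)                      # keep weights in (0, 1); activation assumed
    return (w * patch_scores).sum(dim=1) / (w.sum(dim=1) + 1e-8)

# example: a batch of 2 images, 196 patches each (as in a 224x224 ViT with 16x16 patches)
scores = torch.rand(2, 196)
weights = torch.randn(2, 196)
print(patch_weighted_score(scores, weights))              # one quality score per image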


Network Architecture

[Figure: MANIQA network architecture]

Dataset

The PIPAL22 dataset is used in the NTIRE 2022 competition, and we test our model on PIPAL21.
We also conducted experiments on the LIVE, CSIQ, TID2013, and KADID-10K datasets.

Attention:

  • Put the MOS labels and the dataset Python files into the data folder.
  • The validation dataset comes from NTIRE 2021. If you want to reproduce the results on the validation or test set of the NTIRE 2022 NR-IQA competition, register for the competition and upload submission.zip following the instructions on the website.

Checkpoints

Open the linked page and download the pretrained model checkpoints; the attached source files can be ignored (the Koniq-10k tag has the latest source).

Training set → testing set → checkpoint of MANIQA:

  • PIPAL2022 (200 reference images, 23,200 distorted images, MOS scores for each distorted image) → [Validation] PIPAL2022 (1,650 distorted images) → download (SRCC: 0.686, PLCC: 0.707)
  • KADID-10K (81 reference images, 10,125 distorted images; 8,000 distorted images for training) → KADID-10K (2,125 distorted images for testing) → download (SRCC: 0.939, PLCC: 0.939)
  • KONIQ-10K (in-the-wild database of 10,073 quality-scored images; 8,058 images for training) → KONIQ-10K (2,015 images for testing) → download (SRCC: 0.930, PLCC: 0.946)
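SRCC (Spearman rank-order correlation) and PLCC (Pearson linear correlation) are the standard IQA agreement metrics between model predictions and MOS labels. A minimal sketch of computing them with SciPy (the arrays reuse the example values from the qualitative table above):

from scipy import stats

mos = [4.33, 2.27, 1.33, 1.10]           # ground-truth MOS values
pred = [0.8141, 0.2615, 0.0871, 0.0490]  # model predictions

srcc, _ = stats.spearmanr(mos, pred)     # rank-order (monotonic) agreement
plcc, _ = stats.pearsonr(mos, pred)      # linear agreement
print(f"SRCC: {srcc:.3f}, PLCC: {plcc:.3f}")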

Usage

Training MANIQA model

  • Modify "dataset_name" in config
  • Modify train dataset path: "train_dis_path"
  • Modify validation dataset path: "val_dis_path"
python train_maniqa.py
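For orientation, a hypothetical sketch of what those config fields might look like (the key names come from the list above; the values and surrounding structure are assumptions, not the repository's actual config):

# hypothetical config excerpt -- adjust the paths to your local setup
config = {
    "dataset_name": "pipal",                       # selects the dataset loader
    "train_dis_path": "/data/PIPAL22/Train_dis/",  # distorted training images
    "val_dis_path": "/data/PIPAL22/Val_dis/",      # distorted validation images
}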

Predicting the quality score of a single image

  • Modify the image path "image_path"
  • Modify the checkpoint path "ckpt_path"
python predict_one_image.py 
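A minimal sketch of what such a prediction step looks like in PyTorch (the paths, preprocessing, and checkpoint format below are assumptions for illustration; predict_one_image.py contains the actual logic):

import torch
from PIL import Image
from torchvision import transforms

image_path = "samples/example.png"      # hypothetical paths; replace with your own
ckpt_path = "ckpts/maniqa_koniq10k.pt"

# ViT-style preprocessing; the exact transform used by the repository is an assumption here
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

model = torch.load(ckpt_path, map_location="cpu")  # assumes the checkpoint stores the whole model
model.eval()

img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    score = model(img)
print(f"Predicted quality score: {score.item():.4f}")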

Inference for PIPAL22 validation and testing

Generating the output file:

  • Modify the dataset path "test_dis_path"
  • Modify the trained model path "model_path"
python inference.py
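The challenge submission expects one predicted score per distorted image. A rough sketch of writing such an output file (the file name and the "image_name,score" line format are assumptions based on the challenge convention):

predictions = {                  # made-up example predictions
    "A0001_01_00.bmp": 0.7436,
    "A0001_02_00.bmp": 0.6258,
}

with open("output.txt", "w") as f:
    for name, score in predictions.items():
        f.write(f"{name},{score}\n")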

Results

[Figure: quantitative results]

Environments

  • Platform: PyTorch 1.8.0
  • Language: Python 3.7.9
  • Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-104-generic x86_64)
  • CUDA Version 11.2
  • GPU: NVIDIA GeForce RTX 3090 with 24GB memory
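A quick sanity check that your local setup matches these versions, using standard PyTorch attributes:

import torch

print(torch.__version__)          # expect 1.8.0
print(torch.version.cuda)         # CUDA toolkit PyTorch was built with
print(torch.cuda.is_available())  # True if the GPU is visible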

Requirements

Python requirements can be installed with:

pip install -r requirements.txt

Citation

@inproceedings{yang2022maniqa,
  title={MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment},
  author={Yang, Sidi and Wu, Tianhe and Shi, Shuwei and Lao, Shanshan and Gong, Yuan and Cao, Mingdeng and Wang, Jiahao and Yang, Yujiu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1191--1200},
  year={2022}
}

Acknowledgment

Our code partially borrows from anse3832 and timm. Thanks also to SwinIR for its README.md, after which we modeled ours.

Related Work

NTIRE2021 IQA Full-Reference Competition

[CVPRW 2021] Region-Adaptive Deformable Network for Image Quality Assessment (4th place in the FR track)

paper code

NTIRE2022 IQA Full-Reference Competition

[CVPRW 2022] Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network (1st place in the FR track)

paper code

More Repositories

1. TediGAN (Python, 373 stars): [CVPR 2021] PyTorch implementation for TediGAN: Text-Guided Diverse Face Image Generation and Manipulation
2. MM-CelebA-HQ-Dataset (Python, 215 stars): [CVPR 2021] Multi-Modal-CelebA-HQ: A Large-Scale Text-Driven Face Generation and Understanding Dataset
3. AHIQ (Python, 74 stars): [CVPRW 2022] Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network
4. RADN (Python, 62 stars): [CVPRW 2021] Codes for Region-Adaptive Deformable Network for Image Quality Assessment
5. interpGaze (Python, 40 stars): [ACM MM 2020] Code and Data for Controllable Continuous Gaze Redirection
6. MAP (Python, 31 stars)
7. SCL (Python, 20 stars): Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning
8. PUM (Python, 17 stars): [CVPR 2021] PyTorch implementation for Probabilistic Modeling of Semantic Ambiguity for Scene Graph Generation
9. HVISNet (Python, 15 stars): Code and Data for Real-time Human-Centric Segmentation for Complex Video Scenes
10. AutoIE2 (Python, 14 stars): [NLPCC 2021] Shared Task on AutoIE2: Sub-Event Identification
11. AttentionProbe (Python, 11 stars): [ICASSP 2022] Official PyTorch Implementation for "Attention Probe: Vision Transformer Distillation in the Wild"
12. PoseDet (Python, 11 stars): [FG 2021] Code for PoseDet: Fast Multi-Person Pose Estimation Using Pose Embedding
13. MIRTT (Python, 8 stars): [EMNLP 2021] MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering
14. CNN-FCF (Python, 4 stars): [CVPR 2019] Compressing Convolutional Neural Networks via Factorized Convolutional Filters