Awesome Image Quality Assessment (IQA)
A comprehensive collection of IQA papers, datasets, and code. We also provide PyTorch implementations of mainstream metrics in IQA-PyTorch; a short usage sketch follows below.
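A minimal, hedged usage sketch of the `pyiqa` package behind IQA-PyTorch. The metric names and image paths below are illustrative placeholders; check `pyiqa.list_models()` for what is actually available in your installed version.

```python
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# No-reference (NR) metric: only the test image is needed.
niqe = pyiqa.create_metric("niqe", device=device)
nr_score = niqe("distorted.png")  # accepts an image path or an NCHW tensor in [0, 1]

# Full-reference (FR) metric: the test image is compared against a pristine reference.
lpips = pyiqa.create_metric("lpips", device=device)
fr_score = lpips("distorted.png", "reference.png")

# Some metrics are "lower is better" (e.g. NIQE, LPIPS); the flag below reports which.
print(nr_score, fr_score, niqe.lower_better, lpips.lower_better)
```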
Related Resources:
- Awesome Image Aesthetic Assessment and Cropping. A curated list of resources including papers, datasets, and relevant links to aesthetic evaluation and cropping.
Papers
No Reference (NR)
- [CVPR2023] Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild, Saha et al. Bibtex
- [CVPR2023] Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective, Zhang et al. Github | Bibtex
- [CVPR2023] Quality-aware Pre-trained Models for Blind Image Quality Assessment, Zhao et al. Bibtex
- [AAAI2023] Exploring CLIP for Assessing the Look and Feel of Images, Wang et al. Github | Bibtex (zero-shot sketch after this list)
- [AAAI2023] Data-Efficient Image Quality Assessment with Attention-Panel Decoder, Qin et al. Github | Bibtex
- [TPAMI2022] Continual Learning for Blind Image Quality Assessment, Zhang et al. Github | Bibtex
- [TIP2022] VCRNet: Visual Compensation Restoration Network for No-Reference Image Quality Assessment, Pan et al. Github | Bibtex
- [TMM2022] GraphIQA: Learning Distortion Graph Representations for Blind Image Quality Assessment, Sun et al. Github | Bibtex
- [CVPR2021] Troubleshooting Blind Image Quality Models in the Wild, Wang et al. Github | Bibtex
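The CLIP-based entry above (Exploring CLIP for Assessing the Look and Feel of Images, i.e. CLIP-IQA) scores an image zero-shot by comparing its embedding against an antonym prompt pair. The sketch below only illustrates that basic idea and is not the official implementation, which modifies the CLIP backbone and (in CLIP-IQA+) learns the prompts; the package, backbone name, and image path are assumptions.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # backbone choice is an assumption

image = preprocess(Image.open("test.png")).unsqueeze(0).to(device)  # placeholder path
prompts = clip.tokenize(["Good photo.", "Bad photo."]).to(device)   # antonym prompt pair

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    # Softmax over cosine similarities; the probability of "Good photo." is the quality score.
    logits = 100.0 * image_feat @ text_feat.T
    quality = logits.softmax(dim=-1)[0, 0].item()

print(f"Predicted quality in [0, 1]: {quality:.3f}")
```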
| Paper Link | Method | Type | Published | Code | Keywords |
| --- | --- | --- | --- | --- | --- |
| arXiv | MANIQA | NR | CVPRW2022 | Official | Transformer, multi-dimension attention, dual branch |
| arXiv | TReS | NR | WACV2022 | Official | Transformer, relative ranking, self-consistency |
|  | KonIQ++ | NR | BMVC2021 | Official | Multi-task with distortion prediction |
| arXiv | MUSIQ | NR | ICCV2021 | Official / PyTorch | Multi-scale, transformer, Aspect Ratio Preserved (ARP) resizing |
| arXiv | CKDN | NR | ICCV2021 | Official | Degraded reference, conditional knowledge distillation (related to HIQA) |
|  | HyperIQA | NR | CVPR2020 | Official | Content-aware hyper network |
| arXiv | Meta-IQA | NR | CVPR2020 | Official | Meta-learning |
| arXiv | GIQA | NR | ECCV2020 | Official | Generated image |
| arXiv | PI | NR | 2018 PIRM Challenge | Project | 1/2 * (NIQE + (10 - NRQM)); see the sketch after this table |
| arXiv | HIQA | NR | CVPR2018 | Project | Hallucinated reference |
| arXiv | BPSQM | NR | CVPR2018 |  | Pixel-wise quality map |
| arXiv | RankIQA | NR | ICCV2017 | Github | Pretrain on synthetically ranked data |
|  | CNNIQA | NR | CVPR2014 | PyTorch | First CNN-based NR-IQA |
| arXiv | UNIQUE | NR | TIP2021 | Github | Combine synthetic and authentic image pairs |
| arXiv | DBCNN | NR | TCSVT2020 | Official | Two branches for synthetic and authentic distortions |
|  | SFA | NR | TMM2019 | Official | Aggregate ResNet50 features of multiple cropped patches |
| pdf/arXiv | PQR | NR/Aesthetic | TIP2019 | Official1 / Official2 | Unify different types of aesthetic labels |
| arXiv | WaDIQaM (deepIQA) | NR/FR | TIP2018 | PyTorch | Weighted average of patch qualities, shared FR/NR models |
|  | NIMA | NR | TIP2018 | PyTorch / TensorFlow | Squared EMD loss |
|  | MEON | NR | TIP2017 |  | Multi-task: distortion learning and quality prediction |
| arXiv | dipIQ | NR | TIP2017 | download | Similar to RankIQA |
| arXiv | NRQM (Ma) | NR | CVIU2017 | Project | Traditional, super-resolution |
| arXiv | FRIQUEE | NR | JoV2017 | Official | Authentically distorted, bag of features |
| IEEE | HOSA | NR | TIP2016 | Matlab download | Traditional |
|  | ILNIQE | NR | TIP2015 | Official | Traditional |
|  | BRISQUE | NR | TIP2012 | Official | Traditional |
|  | BLIINDS-II | NR | TIP2012 | Official | Traditional |
|  | CORNIA | NR | CVPR2012 | Matlab download | Codebook representation |
|  | NIQE | NR | SPL2012 | Official | Traditional |
|  | DIIVINE | NR | TIP2011 | Official | Traditional |
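The PI (Perceptual Index) row above is defined directly from two other no-reference metrics in this table. A worked sketch of that formula, assuming the NIQE and NRQM scores have already been computed (e.g., with IQA-PyTorch):

```python
def perceptual_index(niqe_score: float, nrqm_score: float) -> float:
    """PI from the 2018 PIRM challenge: 0.5 * (NIQE + (10 - NRQM)); lower is better."""
    return 0.5 * (niqe_score + (10.0 - nrqm_score))

print(perceptual_index(5.2, 6.8))  # illustrative values, not measured from a real image
```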
Full Reference (FR)
- [ECCV2022] Shift-tolerant Perceptual Similarity Metric, Ghildyal et al. Github | Bibtex
- [BMVC2022] Content-Diverse Comparisons improve IQA, Thong et al. Bibtex
- [ACM MM2022] Quality Assessment of Image Super-Resolution: Balancing Deterministic and Statistical Fidelity, Zhou et al. Github | Bibtex
| Paper Link | Method | Type | Published | Code | Keywords |
| --- | --- | --- | --- | --- | --- |
| arXiv | AHIQ | FR | CVPR2022 NTIRE workshop | Official | Attention, Transformer |
| arXiv | JSPL | FR | CVPR2022 | Official | Semi-supervised and positive-unlabeled (PU) learning |
| arXiv | CVRKD | NAR | AAAI2022 | Official | Non-aligned content reference, knowledge distillation |
| arXiv | IQT | FR | CVPRW2021 | PyTorch | Transformer |
| arXiv | A-DISTS | FR | ACMM2021 | Official |  |
| arXiv | DISTS | FR | TPAMI2021 | Official |  |
| arXiv | LPIPS | FR | CVPR2018 | Project | Perceptual similarity, pairwise preference |
| arXiv | PieAPP | FR | CVPR2018 | Project | Perceptual similarity, pairwise preference |
| arXiv | WaDIQaM | NR/FR | TIP2018 | Official |  |
| arXiv | JND-SalCAR | FR | TCSVT2020 |  | JND (Just-Noticeable-Difference) |
|  | QADS | FR | TIP2019 | Project | Super-resolution |
|  | FSIM | FR | TIP2011 | Project | Traditional |
|  | VIF/IFC | FR | TIP2006 | Project | Traditional |
|  | MS-SSIM | FR |  | Project | Traditional |
|  | SSIM | FR | TIP2004 | Project | Traditional |
|  | PSNR | FR |  |  | Traditional; see the sketch after this table |
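As a concrete example of the full-reference setting, here is a minimal PSNR computation in PyTorch (referenced in the last table row), assuming float tensors scaled to [0, 1]. It is a sketch, not a reference implementation:

```python
import torch

def psnr(test: torch.Tensor, ref: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # Mean squared error between the distorted image and its pristine reference.
    mse = torch.mean((test - ref) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

ref = torch.rand(1, 3, 256, 256)
test = (ref + 0.05 * torch.randn_like(ref)).clamp(0, 1)  # synthetic noisy version
print(psnr(test, ref))  # higher is better; identical images give +inf
```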
Image Aesthetic Assessment
- [CVPR2023] VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining, Ke et al. Bibtex
- [CVPR2023] Towards Artistic Image Aesthetics Assessment: a Large-scale Dataset and a New Method, Yi et al. Github | Bibtex
Others
| Paper Link | Method | Published | Code | Keywords |
| --- | --- | --- | --- | --- |
| arXiv | NiNLoss | ACMM2020 | Official | Norm-in-Norm loss (sketched below) |
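The Norm-in-Norm loss listed above normalizes both the predicted scores and the subjective scores within a batch before measuring their distance, which stabilizes and speeds up quality regression. The sketch below captures only that core idea under simplifying assumptions (fixed L2 normalization, mean-reduced L1 distance); the official implementation parameterizes both norms and differs in details.

```python
import torch

def norm_in_norm_loss(pred: torch.Tensor, mos: torch.Tensor,
                      p: float = 1.0, eps: float = 1e-8) -> torch.Tensor:
    # Normalize a batch of scores: subtract the mean, then divide by the L2 norm.
    def normalize(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean()
        return x / (x.norm(p=2) + eps)
    return (normalize(pred) - normalize(mos)).abs().pow(p).mean()

pred = torch.randn(16, requires_grad=True)  # raw network outputs for a batch
mos = torch.rand(16) * 100                  # e.g., MOS labels on a 0-100 scale
loss = norm_in_norm_loss(pred, mos)
loss.backward()
```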
Datasets
IQA datasets
| Paper Link | Dataset Name | Type | Published | Website | Images | Annotations |
| --- | --- | --- | --- | --- | --- | --- |
| arXiv | PaQ-2-PiQ | NR | CVPR2020 | Official github | 40k, 120k patches | 4M |
| CVF | SPAQ | NR | CVPR2020 | Official github | 11k (smartphone) |  |
| arXiv | KonIQ-10k | NR | TIP2020 | Project | 10k from YFCC100M | 1.2M |
| arXiv | CLIVE | NR | TIP2016 | Project | 1200 | 350k |
|  | AVA | NR / Aesthetic | CVPR2012 | Github/Project | 250k (60 categories) |  |
| arXiv | PIPAL | FR | ECCV2020 | Project | 250 | 1.13M |
| arXiv | KADIS-700k | FR | arXiv | Project | 140k pristine / 700k distorted | No subjective scores |
| IEEE | KADID-10k | FR | QoMEX2019 | Project | 81 | 10k distortions, 30 ratings (DCRs) per image |
|  | Waterloo-Exp | FR | TIP2017 | Project | 4744 | 94k distortions |
|  | MDID | FR | PR2017 | --- | 20 | 1600 distortions |
|  | TID2013 | FR | SP2015 | Project | 25 | 3000 distortions |
|  | LIVEMD | FR | ACSSC2012 | Project | 15 pristine images | Two successive distortions |
|  | CSIQ | FR | Journal of Electronic Imaging 2010 | --- | 30 | 866 distortions |
|  | TID2008 | FR | 2009 | Project | 25 | 1700 distortions |
|  | LIVE IQA | FR | TIP2006 | Project | 29 | 780 synthetic distortions |
| link | IVC | FR | 2005 | --- | 10 | 185 distortions |
Perceptual similarity datasets
| Paper Link | Dataset Name | Type | Published | Website | Images | Annotations |
| --- | --- | --- | --- | --- | --- | --- |
| arXiv | BAPPS (LPIPS) | FR | CVPR2018 | Project | 187.7k | 484k |
| arXiv | PieAPP | FR | CVPR2018 | Project | 200 images | 2.3M |