Revisiting Image Pyramid Structure for High Resolution Salient Object Detection (InSPyReNet)
Official PyTorch implementation of Revisiting Image Pyramid Structure for High Resolution Salient Object Detection (InSPyReNet)
To appear in the 16th Asian Conference on Computer Vision (ACCV2022)
Taehun Kim, Kunhee Kim, Joonyeong Lee, Dongmin Cha, Jiho Lee, Daijin Kim
Abstract: Salient object detection (SOD) has been in the spotlight recently, yet has been studied less for high-resolution (HR) images. Unfortunately, HR images and their pixel-level annotations are certainly more labor-intensive and time-consuming to obtain compared to low-resolution (LR) images. Therefore, we propose an image pyramid-based SOD framework, Inverse Saliency Pyramid Reconstruction Network (InSPyReNet), for HR prediction without any HR datasets. We design InSPyReNet to produce a strict image pyramid structure of the saliency map, which enables ensembling multiple results with pyramid-based image blending. For HR prediction, we design a pyramid blending method which synthesizes two different image pyramids from a pair of LR and HR scales of the same image to overcome the effective receptive field (ERF) discrepancy. Our extensive evaluation on public LR and HR SOD benchmarks demonstrates that InSPyReNet surpasses the State-of-the-Art (SotA) methods on various SOD metrics and boundary accuracy.
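The "strict image pyramid structure" above refers to a Laplacian-style pyramid, where each level stores the band-pass detail needed to reconstruct the next finer scale. Below is a minimal NumPy sketch of this decomposition and inverse reconstruction; the box-filter downsampling and function names are illustrative simplifications, not the paper's actual implementation:

```python
import numpy as np

def downsample(img):
    # 2x2 box average as a stand-in for Gaussian smoothing + subsampling
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img):
    # nearest-neighbour expansion; a real implementation would smooth
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_laplacian_pyramid(img, levels=3):
    """Decompose an image into band-pass residuals plus a coarse low-pass level."""
    pyramid = []
    current = img
    for _ in range(levels):
        low = downsample(current)
        pyramid.append(current - upsample(low))  # detail lost at this scale
        current = low
    pyramid.append(current)  # coarsest low-pass level
    return pyramid

def reconstruct(pyramid):
    """Inverse reconstruction: upsample and add residuals, coarse to fine."""
    current = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        current = upsample(current) + residual
    return current
```

Pyramid blending then combines corresponding levels of two such pyramids (e.g. from an LR and an HR pass over the same image) before running the inverse reconstruction.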
## Contents
- News
- Demo
- Applications
- Easy Download
- Getting Started
- Model Zoo
- Results
- Citation
- Acknowledgement
- References
## 📰 News

- [2022.10.04] TasksWithCode mentioned our work in their blog and reproduced our work on Colab. Thank you for your attention!
- [2022.10.20] We trained our model on the Dichotomous Image Segmentation dataset (DIS5K) and showed competitive results! The trained checkpoint and pre-computed segmentation masks are available in the Model Zoo. You can also check our qualitative and quantitative results in the Results section.
- [2022.10.28] Multi-GPU training for the latest PyTorch is now available.
- [2022.10.31] TasksWithCode provided an amazing web demo with HuggingFace. Visit the WebApp and try it with your image!
- [2022.11.09] Lane segmentation (LaneSOD) is now available in the LaneSOD repository.
- [2022.11.18] I am speaking at The 16th Asian Conference on Computer Vision (ACCV2022). Please check out my talk if you're attending the event! #ACCV2022 #Macau - via #Whova event app
- [2022.11.23] We made our work available as a pypi package. Please visit transparent-background to download our tool and try it on your machine. It works as a command-line tool and a Python API.
- [2023.01.18] rsreetech shared a tutorial for our pypi package transparent-background using Colab.
## 🚀 Demo

Image Sample | Video Sample
---|---
## 🎮 Applications

Here are some applications/extensions of our work.

### Web Application

TasksWithCode provided a WebApp on HuggingFace to generate your own results!
## 📟 Command-line Tool / Python API

Try using our model as a command-line tool or Python API. More details about usage are available in the transparent-background repository.

```bash
pip install transparent-background
```
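A minimal sketch of the Python API is shown below; exact keyword arguments and defaults may differ across package versions, and `sample.png` is a placeholder file name:

```python
from PIL import Image
from transparent_background import Remover

# Loads a pre-trained InSPyReNet checkpoint (downloaded on first use).
remover = Remover()

img = Image.open('sample.png').convert('RGB')  # placeholder input image
out = remover.process(img, type='rgba')        # background removed, RGBA output
out.save('sample_rgba.png')
```

The same model also powers the command-line tool installed by the pip package.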
## 🚗 Lane Segmentation

We extend our model to detect lane markers in driving scenes in the LaneSOD repository.
## 🍰 Easy Download

Downloading each dataset and checkpoint separately is quite tedious, even for me. Instead, you can fetch the ImageNet pre-trained backbone checkpoints, training datasets, testing datasets for benchmarks, pre-trained model checkpoints, and pre-computed saliency maps with the single command below.
```bash
python utils/download.py --extra --dest [DEST]
```
- `--extra, -e`: Without this argument, only the datasets, checkpoints, and results from our main paper will be downloaded. With it, all data will be downloaded, including results from the supplementary material and the DIS5K experiments.
- `--dest [DEST], -d [DEST]`: Use this argument to specify the download destination; symbolic links to the destination folders will be created automatically inside `data` and `snapshots`. Use it if you want to store the data on another physical disk. Otherwise, everything is downloaded inside this repository folder.
If you want to download a certain checkpoint or pre-computed map, please refer to Getting Started and Model Zoo.
## 🛫 Getting Started

Please refer to getting_started.md for training, testing, and evaluating on benchmarks, and for running inference on your own images.
## 🦒 Model Zoo

Please refer to model_zoo.md for downloading pre-trained models and pre-computed saliency maps.
## 💯 Results

### Quantitative Results

LR Benchmark | HR Benchmark | HR Benchmark (Trained with extra DB) | DIS
---|---|---|---

### Qualitative Results

DAVIS-S & HRSOD | UHRSD | UHRSD (w/ HR scale) | DIS
---|---|---|---
## Citation

```bibtex
@inproceedings{kim2022revisiting,
  title={Revisiting Image Pyramid Structure for High Resolution Salient Object Detection},
  author={Kim, Taehun and Kim, Kunhee and Lee, Joonyeong and Cha, Dongmin and Lee, Jiho and Kim, Daijin},
  booktitle={Proceedings of the Asian Conference on Computer Vision},
  pages={108--124},
  year={2022}
}
```
## Acknowledgement

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00897, Development of Object Detection and Recognition for Intelligent Vehicles; No. B0101-15-0266, Development of High Performance Visual BigData Discovery Platform for Large-Scale Realtime Data Analysis).
## 🎉 Special Thanks to

- The TasksWithCode team for sharing our work and making the most amazing web app demo.
## References

### Related Works

- Towards High-Resolution Salient Object Detection (paper | github)
- Disentangled High Quality Salient Object Detection (paper | github)
- Pyramid Grafting Network for One-Stage High Resolution Saliency Detection (paper | github)

### Resources

- Backbones: Res2Net, Swin Transformer
- Datasets
- Evaluation Toolkit
  - SOD Metrics (e.g., S-measure): PySOD Metrics
  - Boundary Metric (mBA): CascadePSP
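For intuition about what a boundary metric measures, here is a simplified sketch in the spirit of mBA: pixel accuracy restricted to a band around the ground-truth boundary, averaged over several band widths. This is my own illustration, not CascadePSP's implementation:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_accuracy(pred, gt, radii=(1, 2, 3, 5)):
    """Pixel accuracy inside a band around the GT boundary, averaged over radii."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    boundary = gt ^ binary_erosion(gt)  # pixels removed by one erosion step
    scores = []
    for r in radii:
        band = binary_dilation(boundary, iterations=r)  # widen the boundary band
        scores.append((pred[band] == gt[band]).mean())
    return float(np.mean(scores))
```

Unlike whole-image metrics such as S-measure, a band-restricted score like this isolates how well a prediction traces object contours, which is where HR methods differ most.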