SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation (TPAMI 2023)

Binhui Xie, Shuang Li, Mingjia Li, Chi Harold Liu, Gao Huang, and Guoren Wang

Paper (https://arxiv.org/abs/2204.08808) · Project

Update on 2023/11: SePiCo is selected as a 🏆 ESI Highly Cited Paper!

Update on 2023/02/15: Code release for Cityscapes → Dark Zurich.

Update on 2023/01/14: 🥳 We are happy to announce that SePiCo has been accepted for publication in an upcoming issue of TPAMI.

Update on 2022/09/24: All checkpoints are available.

Update on 2022/09/04: Code release.

Update on 2022/04/20: The arXiv version of SePiCo is available.

Overview

In this work, we propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels to promote the learning of a class-discriminative and class-balanced pixel embedding space across domains, eventually boosting the performance of self-training methods.
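
To make the idea concrete, below is a minimal, illustrative sketch (not the exact SePiCo implementation) of a semantic-guided pixel contrastive loss: each pixel embedding is pulled toward the centroid of its semantic class and pushed away from the centroids of all other classes. All names and shapes here are hypothetical.

# illustrative sketch only: InfoNCE-style pixel-to-centroid contrast
import torch
import torch.nn.functional as F

def pixel_contrast_loss(embeddings, labels, centroids, tau=0.1, ignore_index=255):
    # embeddings: (N, D) pixel features; labels: (N,) class ids;
    # centroids: (C, D) per-class prototypes; tau: temperature
    valid = labels != ignore_index
    emb = F.normalize(embeddings[valid], dim=1)    # (M, D) unit-norm pixel embeddings
    ctr = F.normalize(centroids, dim=1)            # (C, D) unit-norm class centroids
    logits = emb @ ctr.t() / tau                   # (M, C) scaled cosine similarities
    return F.cross_entropy(logits, labels[valid])  # positive = own-class centroid

# toy usage with 19 Cityscapes classes and 64-d embeddings
loss = pixel_contrast_loss(torch.randn(16, 64), torch.randint(0, 19, (16,)), torch.randn(19, 64))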

Installation

This code is implemented with Python 3.8.5 and PyTorch 1.7.1 on CUDA 11.0.

To try out this project, it is recommended to set up a virtual environment first:

# create and activate the environment
conda create --name sepico -y python=3.8.5
conda activate sepico

# install the right pip and dependencies for the fresh python
conda install -y ipython pip

Then, the dependencies can be installed by:

# install required packages
pip install -r requirements.txt

# install mmcv-full; this command compiles mmcv locally and may take some time
pip install mmcv-full==1.3.7  # requires the other packages to be installed first

Alternatively, the mmcv-full package can be installed more quickly from the official pre-built wheels, for instance:

# another way to install mmcv-full, faster
pip install mmcv-full==1.3.7 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html

The environment is now fully prepared.
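
As an optional sanity check (assuming the versions above), you can confirm from Python that the expected packages are visible:

# optional: verify the expected versions inside the activated environment
import torch
import mmcv

print(torch.__version__)          # expected: 1.7.1
print(torch.version.cuda)         # expected: 11.0
print(mmcv.__version__)           # expected: 1.3.7
print(torch.cuda.is_available())  # True if the GPU driver is set up correctly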

Datasets Preparation

Download Datasets

  • GTAV: Download all zipped images, along with their zipped labels, from here and extract them to a custom directory.
  • Cityscapes: Download leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip from here and extract them to a custom directory.
  • Dark Zurich: Download Dark_Zurich_train_anon.zip, Dark_Zurich_val_anon.zip and Dark_Zurich_test_anon_withoutGt.zip from here and extract them to a custom directory.

Setup Datasets

Symlink the required datasets:

ln -s /path/to/gta5/dataset data/gta
ln -s /path/to/cityscapes/dataset data/cityscapes
ln -s /path/to/dark_zurich/dataset data/dark_zurich

Perform preprocessing to convert label IDs to the train IDs and gather dataset statistics:

python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8

Ultimately, the data structure should look like this:

SePiCo
├── ...
├── data
│   ├── cityscapes
│   │   ├── gtFine
│   │   ├── leftImg8bit
│   ├── dark_zurich
│   │   ├── corresp
│   │   ├── gt
│   │   ├── rgb_anon
│   ├── gta
│   │   ├── images
│   │   ├── labels
├── ...
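
Before moving on, it may help to verify that the symlinks resolve to this layout. A small hypothetical check (the paths mirror the tree above):

# hypothetical layout check; run from the SePiCo root
from pathlib import Path

expected = [
    "data/cityscapes/gtFine",
    "data/cityscapes/leftImg8bit",
    "data/dark_zurich/rgb_anon",
    "data/gta/images",
    "data/gta/labels",
]
for p in expected:
    print(("ok     " if Path(p).exists() else "MISSING"), p)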

Model Zoo

We provide pretrained models for the domain adaptive semantic segmentation tasks below through Google Drive and Baidu Netdisk (access code: pico).

GTAV → Cityscapes (DeepLab-v2 based)

variant   model name                        mIoU  checkpoint download
DistCL    sepico_distcl_gta2city_dlv2.pth   61.0  Google / Baidu (acc: pico)
BankCL    sepico_bankcl_gta2city_dlv2.pth   59.8  Google / Baidu (acc: pico)
ProtoCL   sepico_protocl_gta2city_dlv2.pth  58.8  Google / Baidu (acc: pico)

GTAV → Cityscapes (DAFormer based)

variant   model name                            mIoU  checkpoint download
DistCL    sepico_distcl_gta2city_daformer.pth   70.3  Google / Baidu (acc: pico)
BankCL    sepico_bankcl_gta2city_daformer.pth   68.7  Google / Baidu (acc: pico)
ProtoCL   sepico_protocl_gta2city_daformer.pth  68.5  Google / Baidu (acc: pico)

SYNTHIA → Cityscapes (DeepLab-v2 based)

variant   model name                        mIoU  checkpoint download
DistCL    sepico_distcl_syn2city_dlv2.pth   58.1  Google / Baidu (acc: pico)
BankCL    sepico_bankcl_syn2city_dlv2.pth   57.4  Google / Baidu (acc: pico)
ProtoCL   sepico_protocl_syn2city_dlv2.pth  56.8  Google / Baidu (acc: pico)

SYNTHIA → Cityscapes (DAFormer based)

variant   model name                            mIoU  checkpoint download
DistCL    sepico_distcl_syn2city_daformer.pth   64.3  Google / Baidu (acc: pico)
BankCL    sepico_bankcl_syn2city_daformer.pth   63.3  Google / Baidu (acc: pico)
ProtoCL   sepico_protocl_syn2city_daformer.pth  62.9  Google / Baidu (acc: pico)

Cityscapes → Dark Zurich (DeepLab-v2 based)

variant   model name                         mIoU  checkpoint download
DistCL    sepico_distcl_city2dark_dlv2.pth   45.4  Google / Baidu (acc: pico)
BankCL    sepico_bankcl_city2dark_dlv2.pth   44.1  Google / Baidu (acc: pico)
ProtoCL   sepico_protocl_city2dark_dlv2.pth  42.6  Google / Baidu (acc: pico)

Cityscapes → Dark Zurich (DAFormer based)

variant   model name                             mIoU  checkpoint download
DistCL    sepico_distcl_city2dark_daformer.pth   54.2  Google / Baidu (acc: pico)
BankCL    sepico_bankcl_city2dark_daformer.pth   53.3  Google / Baidu (acc: pico)
ProtoCL   sepico_protocl_city2dark_daformer.pth  52.7  Google / Baidu (acc: pico)

Our trained model (sepico_distcl_city2dark_daformer.pth) is also tested for generalization on the Nighttime Driving and BDD100k-night test sets.

Method  model name                             Dark Zurich-test  Nighttime Driving  BDD100k-night  checkpoint download
SePiCo  sepico_distcl_city2dark_daformer.pth   54.2              56.9               40.6           Google / Baidu (acc: pico)

SePiCo Evaluation

Evaluation on Cityscapes

To evaluate the pretrained models on Cityscapes, please run as follows:

python -m tools.test /path/to/config /path/to/checkpoint --eval mIoU

Example

For example, if you download sepico_distcl_gta2city_dlv2.pth along with its config JSON file sepico_distcl_gta2city_dlv2.json into the folder ./checkpoints/sepico_distcl_gta2city_dlv2/, then the evaluation command is:

python -m tools.test ./checkpoints/sepico_distcl_gta2city_dlv2/sepico_distcl_gta2city_dlv2.json ./checkpoints/sepico_distcl_gta2city_dlv2/sepico_distcl_gta2city_dlv2.pth --eval mIoU

Evaluation on Dark Zurich

To evaluate on Dark Zurich, please get label predictions as follows and submit them to the official test server.

Get label predictions for the test set locally:

python -m tools.test /path/to/config /path/to/checkpoint --format-only --eval-options imgfile_prefix=/path/to/labelTrainIds

Example

For example, if you download sepico_distcl_city2dark_daformer.pth along with its config JSON file sepico_distcl_city2dark_daformer.json into the folder ./checkpoints/sepico_distcl_city2dark_daformer/, then the evaluation command is:

python -m tools.test ./checkpoints/sepico_distcl_city2dark_daformer/sepico_distcl_city2dark_daformer.json ./checkpoints/sepico_distcl_city2dark_daformer/sepico_distcl_city2dark_daformer.pth --format-only --eval-options imgfile_prefix=dark_test/distcl_daformer/labelTrainIds

Note that the test server only accepts submissions with the following directory structure:

submit.zip
├── confidence
├── labelTrainIds
├── labelTrainIds_invalid

Therefore, we need to construct the confidence and labelTrainIds_invalid directories by hand (they are not needed for SePiCo evaluation).

Our practice is listed below for reference (see the example above for the directory names):

cd dark_test/distcl_daformer
cp -r labelTrainIds labelTrainIds_invalid
cp -r labelTrainIds confidence
zip -q -r sepico_distcl_city2dark_daformer.zip labelTrainIds labelTrainIds_invalid confidence
# Now submit sepico_distcl_city2dark_daformer.zip to the test server for results.

SePiCo Training

To begin with, download SegFormer's official MiT-B5 weights (i.e., mit_b5.pth) pretrained on ImageNet-1k from here and put it into a new folder ./pretrained.

The training entry point is run_experiments.py. To examine the settings for a specific task, see experiments.py. In general, training is launched as:

python run_experiments.py --exp <exp_id>

Tasks 1–6 are run on GTAV → Cityscapes, and the mapping between <exp_id> and tasks is:

<exp_id>  variant  backbone    feature
1         DistCL   ResNet-101  layer-4
2         BankCL   ResNet-101  layer-4
3         ProtoCL  ResNet-101  layer-4
4         DistCL   MiT-B5      all-fusion
5         BankCL   MiT-B5      all-fusion
6         ProtoCL  MiT-B5      all-fusion

Tasks 7–8 are run on Cityscapes → Dark Zurich, and the mapping between <exp_id> and tasks is:

<exp_id>  variant  backbone    feature
7         DistCL   ResNet-101  layer-4
8         DistCL   MiT-B5      all-fusion

After training, the models can be tested following SePiCo Evaluation. Note that training results are located in ./work_dirs. The config filename should look like 220827_1906_dlv2_proj_r101v1c_sepico_DistCL-reg-w1.0-start-iter3000-tau100.0-l3-w1.0_rcs0.01_cpl_self_adamw_6e-05_pmT_poly10warm_1x2_40k_gta2cs_seed76_4cc9a.json, and the model files have the suffix .pth.
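
Since the run directory name is generated, a small hypothetical helper like the one below can locate the newest config/checkpoint pair under ./work_dirs and print the corresponding test command (it assumes at least one finished run with a saved config and checkpoint; adjust paths as needed):

# hypothetical helper: print a test command for the newest run in ./work_dirs
from pathlib import Path

runs = [p for p in Path("work_dirs").iterdir() if p.is_dir()]
run = max(runs, key=lambda p: p.stat().st_mtime)                 # newest run directory
cfg = next(run.glob("*.json"))                                   # config saved with the run
ckpt = max(run.glob("*.pth"), key=lambda p: p.stat().st_mtime)   # latest checkpoint
print(f"python -m tools.test {cfg} {ckpt} --eval mIoU")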

Tips on Code Understanding

Acknowledgments

This project builds on several open-source projects. We thank their authors for making the source code publicly available.

Citation

If you find our work helpful, please star 🌟 this repo and cite 📑 our paper. Thanks for your support!

@article{xie2023sepico,
  title={Sepico: Semantic-guided pixel contrast for domain adaptive semantic segmentation},
  author={Xie, Binhui and Li, Shuang and Li, Mingjia and Liu, Chi Harold and Huang, Gao and Wang, Guoren},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}

Contact

For help and issues associated with SePiCo, or to report a bug, please open a GitHub issue or contact the authors by email.

Misc

↳ Stargazers, thank you for your support!


↳ Forkers, thank you for your support!

