

ST++

This is the official PyTorch implementation of our CVPR 2022 paper:

ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation
Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao
In Conference on Computer Vision and Pattern Recognition (CVPR), 2022

We also have a simpler yet stronger end-to-end framework, UniMatch, accepted at CVPR 2023:

Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation [Code]
Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi
In Conference on Computer Vision and Pattern Recognition (CVPR), 2023

Getting Started

Data Preparation

Pre-trained Model

ResNet-50 | ResNet-101 | DeepLabv2-ResNet-101

Dataset

Pascal JPEGImages | Pascal SegmentationClass | Cityscapes leftImg8bit | Cityscapes gtFine

File Organization

├── ./pretrained
    ├── resnet50.pth
    ├── resnet101.pth
    └── deeplabv2_resnet101_coco_pretrained.pth
    
├── [Your Pascal Path]
    ├── JPEGImages
    └── SegmentationClass
    
├── [Your Cityscapes Path]
    ├── leftImg8bit
    └── gtFine
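
As a quick sanity check of the layout above, the following shell snippet (an illustrative sketch; replace the bracketed placeholders with your actual dataset roots) simply lists the expected files and folders, so a missing path shows up as an error before training starts:

ls ./pretrained/resnet50.pth \
   ./pretrained/resnet101.pth \
   ./pretrained/deeplabv2_resnet101_coco_pretrained.pth
ls "[Your Pascal Path]/JPEGImages" "[Your Pascal Path]/SegmentationClass"
ls "[Your Cityscapes Path]/leftImg8bit" "[Your Cityscapes Path]/gtFine"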

Training and Testing

export semi_setting='pascal/1_8/split_0'

CUDA_VISIBLE_DEVICES=0,1 python -W ignore main.py \
  --dataset pascal --data-root [Your Pascal Path] \
  --batch-size 16 --backbone resnet50 --model deeplabv3plus \
  --labeled-id-path dataset/splits/$semi_setting/labeled.txt \
  --unlabeled-id-path dataset/splits/$semi_setting/unlabeled.txt \
  --pseudo-mask-path outdir/pseudo_masks/$semi_setting \
  --save-path outdir/models/$semi_setting

This command runs our ST framework. To run ST++, additionally pass --plus --reliable-id-path outdir/reliable_ids/$semi_setting, as shown below.
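
For reference, the full ST++ command for the same setting, assembled from the flags above (only the last line differs from the ST command):

CUDA_VISIBLE_DEVICES=0,1 python -W ignore main.py \
  --dataset pascal --data-root [Your Pascal Path] \
  --batch-size 16 --backbone resnet50 --model deeplabv3plus \
  --labeled-id-path dataset/splits/$semi_setting/labeled.txt \
  --unlabeled-id-path dataset/splits/$semi_setting/unlabeled.txt \
  --pseudo-mask-path outdir/pseudo_masks/$semi_setting \
  --save-path outdir/models/$semi_setting \
  --plus --reliable-id-path outdir/reliable_ids/$semi_setting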

Acknowledgement

The DeepLabv2 MS COCO pre-trained model is borrowed and converted from AdvSemiSeg. The image partitions are borrowed from Context-Aware-Consistency and PseudoSeg. Part of the training hyper-parameters and network structures are adapted from PyTorch-Encoding. The strong data augmentations are borrowed from MoCo v2 and PseudoSeg.

Thanks a lot for their great work!

Citation

If you find this project useful, please consider citing:

@inproceedings{st++,
  title={ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation},
  author={Yang, Lihe and Zhuo, Wei and Qi, Lei and Shi, Yinghuan and Gao, Yang},
  booktitle={CVPR},
  year={2022}
}

@inproceedings{unimatch,
  title={Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation},
  author={Yang, Lihe and Qi, Lei and Feng, Litong and Zhang, Wayne and Shi, Yinghuan},
  booktitle={CVPR},
  year={2023}
}