
Zero-Shot Semantic Segmentation

Paper

Zero-Shot Semantic Segmentation
Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
valeo.ai, France
Neural Information Processing Systems (NeurIPS) 2019

If you find this code useful for your research, please cite our paper:

@inproceedings{bucher2019zero,
  title={Zero-Shot Semantic Segmentation},
  author={Bucher, Maxime and Vu, Tuan-Hung and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={NeurIPS},
  year={2019}
}

Abstract

Semantic segmentation models are limited in their ability to scale to large numbers of object classes. In this paper, we introduce the new task of zero-shot semantic segmentation: learning pixel-wise classifiers for never-seen object categories with zero training examples. To this end, we present a novel architecture, ZS3Net, combining a deep visual segmentation model with an approach to generate visual representations from semantic word embeddings. In this way, ZS3Net addresses pixel classification tasks where both seen and unseen categories are faced at test time (so-called "generalized" zero-shot classification). Performance is further improved by a self-training step that relies on automatic pseudo-labeling of pixels from unseen classes. On the two standard segmentation datasets, Pascal-VOC and Pascal-Context, we propose zero-shot benchmarks and set competitive baselines. For complex scenes such as those in the Pascal-Context dataset, we extend our approach with a graph-context encoding to fully leverage spatial context priors coming from class-wise segmentation maps.
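
The generator described above is trained as a GMMN, i.e. by minimizing a maximum mean discrepancy (MMD) between generated features and real features of seen classes. As a rough illustration of that criterion (not the paper's implementation; shapes, bandwidth, and data here are hypothetical), a biased squared-MMD estimator with a Gaussian kernel can be sketched in NumPy:

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=4.0):
    """Pairwise Gaussian kernel matrix between rows of x and y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(real, fake, bandwidth=4.0):
    """Biased squared maximum mean discrepancy between two feature sets."""
    k_rr = gaussian_kernel(real, real, bandwidth).mean()
    k_ff = gaussian_kernel(fake, fake, bandwidth).mean()
    k_rf = gaussian_kernel(real, fake, bandwidth).mean()
    return k_rr + k_ff - 2.0 * k_rf

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(64, 16))     # stand-in for real pixel features
fake = rng.normal(0.0, 1.0, size=(64, 16))     # stand-in for generated features
shifted = rng.normal(3.0, 1.0, size=(64, 16))  # features from a different distribution

print(mmd2(real, fake))     # small: the two sets follow the same distribution
print(mmd2(real, shifted))  # larger: the distributions differ
```

In the actual model, the generator's parameters are updated by backpropagating through such a discrepancy, so that features synthesized from word embeddings of unseen classes become statistically consistent with real seen-class features.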

Code

Pre-requisites

  • Python 3.6
  • PyTorch 1.0 or higher
  • CUDA 9.0 or higher

Installation

  1. Clone the repo:
$ git clone https://github.com/valeoai/ZS3
  2. Install this repository and the dependencies using pip:
$ pip install -e ZS3

With this, you can edit the ZS3 code on the fly and import ZS3 functions and classes in other projects as well.

  3. Optional. To uninstall this package, run:
$ pip uninstall ZS3

You can take a look at the Dockerfile if you are uncertain about steps to install this project.

Datasets

Pascal-VOC 2012

  • Pascal-VOC 2012: Please follow the instructions here to download images and semantic segmentation annotations.

  • Semantic Boundaries Dataset: Please follow the instructions here to download images and semantic segmentation annotations. Use this train set, which excludes overlap with the Pascal-VOC validation set.

The Pascal-VOC and SBD datasets directory should have this structure:

ZS3/data/VOC2012/    % Pascal VOC and SBD datasets root
ZS3/data/VOC2012/ImageSets/Segmentation/     % Pascal VOC splits
ZS3/data/VOC2012/JPEGImages/     % Pascal VOC images
ZS3/data/VOC2012/SegmentationClass/      % Pascal VOC segmentation maps
ZS3/data/VOC2012/benchmark_RELEASE/dataset/img      % SBD images
ZS3/data/VOC2012/benchmark_RELEASE/dataset/cls      % SBD segmentation maps
ZS3/data/VOC2012/benchmark_RELEASE/dataset/train_noval.txt       % SBD train set
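
A quick way to check that the layout above is in place before training is to list which expected paths are missing. This is a convenience sketch, not part of the repository; the root path is the one shown above:

```python
from pathlib import Path

# Expected sub-paths under the Pascal-VOC/SBD root, mirroring the tree above.
EXPECTED = [
    "ImageSets/Segmentation",
    "JPEGImages",
    "SegmentationClass",
    "benchmark_RELEASE/dataset/img",
    "benchmark_RELEASE/dataset/cls",
    "benchmark_RELEASE/dataset/train_noval.txt",
]

def missing_paths(root):
    """Return the expected dataset paths that do not exist under root."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

print(missing_paths("ZS3/data/VOC2012"))  # empty list once the data is in place
```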

Pascal-Context

  • Pascal-VOC 2010: Please follow the instructions here to download images.

  • Pascal-Context: Please follow the instructions here to download segmentation annotations.

The Pascal-Context dataset directory should have this structure:

ZS3/data/context/    % Pascal context dataset root
ZS3/data/context/train.txt     % Pascal context train split
ZS3/data/context/val.txt     % Pascal context val split
ZS3/data/context/full_annotations/trainval/     % Pascal context segmentation maps
ZS3/data/context/full_annotations/labels.txt     % Pascal context 459 classes
ZS3/data/context/classes-59.txt     % Pascal context 59 classes
ZS3/data/context/VOCdevkit/VOC2010/JPEGImages     % Pascal VOC images

Training

Pascal-VOC

Follow the steps below to train your model:

  1. Train deeplabv3+ using the Pascal-VOC dataset and a ResNet backbone pretrained on ImageNet (weights here):
python train_pascal.py
  2. Train GMMN and finetune the last classification layer of the trained deeplabv3+ model:
python train_pascal_GMMN.py
  • Main options

    • imagenet_pretrained_path: Path to ImageNet pretrained weights.
    • resume: Path to deeplabv3+ weights.
    • exp_path: Path to saved logs and weights folder.
    • checkname: Name of the saved logs and weights folder.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
  • Final deeplabv3+ and GMMN weights
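
The two class-index options are complementary partitions of the label set. For Pascal-VOC (background plus 20 object classes), one list can be derived from the other, as in this illustrative snippet (the unseen indices below are hypothetical placeholders; the actual splits come from the paper's benchmarks):

```python
NUM_CLASSES = 21  # Pascal-VOC: background + 20 object classes

# Hypothetical unseen-class indices, for illustration only.
unseen_classes_idx_metric = [10, 14, 18]

# Seen classes are simply all remaining indices.
seen_classes_idx_metric = [
    i for i in range(NUM_CLASSES) if i not in unseen_classes_idx_metric
]

print(seen_classes_idx_metric)
```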

Pascal-Context

Follow the steps below to train your model:

  1. Train deeplabv3+ using the Pascal-Context dataset and a ResNet backbone pretrained on ImageNet (weights here):
python train_context.py
  2. Train GMMN and finetune the last classification layer of the trained deeplabv3+ model:
python train_context_GMMN.py
  • Main options

    • imagenet_pretrained_path: Path to ImageNet pretrained weights.
    • resume: Path to deeplabv3+ weights.
    • exp_path: Path to saved logs and weights folder.
    • checkname: Name of the saved logs and weights folder.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
  • Final deeplabv3+ and GMMN weights

2 (bis). Alternatively, train GMMN with graph context and finetune the last classification layer of the trained deeplabv3+ model:

python train_context_GMMN_GCNcontext.py
  • Main options

    • imagenet_pretrained_path: Path to ImageNet pretrained weights.
    • resume: Path to deeplabv3+ weights.
    • exp_path: Path to saved logs and weights folder.
    • checkname: Name of the saved logs and weights folder.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
  • Final deeplabv3+ and GMMN with graph context weights

Testing

python eval_pascal.py
python eval_context.py
  • Main options
    • resume: Path to deeplabv3+ and GMMN weights.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
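
In the generalized zero-shot setting, evaluation typically reports mIoU over seen classes, mIoU over unseen classes, and their harmonic mean. As a toy illustration of how these summaries combine (the per-class IoU values below are made up):

```python
def harmonic_mean(a, b):
    """Harmonic mean of seen and unseen mIoU, a standard GZSL summary metric."""
    return 2.0 * a * b / (a + b) if (a + b) > 0 else 0.0

def miou(per_class_iou, class_idx):
    """Mean IoU restricted to a subset of class indices."""
    vals = [per_class_iou[i] for i in class_idx]
    return sum(vals) / len(vals)

# Hypothetical per-class IoU values for a 6-class toy problem.
iou = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2]
seen, unseen = [0, 1, 2, 3], [4, 5]

m_seen = miou(iou, seen)      # 0.75
m_unseen = miou(iou, unseen)  # 0.25
print(m_seen, m_unseen, harmonic_mean(m_seen, m_unseen))
```

The harmonic mean penalizes models that score well on seen classes but poorly on unseen ones, which is exactly the failure mode generalized zero-shot evaluation is meant to expose.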

Acknowledgements

License

ZS3Net is released under the Apache 2.0 license.
