DADA: Depth-aware Domain Adaptation in Semantic Segmentation

Updates

  • 02/2020: Using CycleGAN-translated images, the DADA model achieves 43.1% mIoU on SYNTHIA-2-Cityscapes.

Paper

DADA: Depth-aware Domain Adaptation in Semantic Segmentation
Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, Patrick Pérez
valeo.ai, France
IEEE International Conference on Computer Vision (ICCV), 2019

If you find this code useful for your research, please cite our paper:

@inproceedings{vu2019dada,
  title={DADA: Depth-aware Domain Adaptation in Semantic Segmentation},
  author={Vu, Tuan-Hung and Jain, Himalaya and Bucher, Maxime and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={ICCV},
  year={2019}
}

Abstract

Unsupervised domain adaptation (UDA) is important for applications where large-scale annotation of representative data is challenging. For semantic segmentation in particular, it helps deploy, on real "target domain" data, models that are trained on annotated images from a different "source domain", notably a virtual environment. To this end, most previous works consider semantic segmentation as the only mode of supervision for source-domain data, while ignoring other, possibly available, information such as depth. In this work, we aim at making the best use of such privileged information while training the UDA model. We propose a unified depth-aware UDA framework that leverages in several complementary ways the knowledge of dense depth in the source domain. As a result, the performance of the trained semantic segmentation model on the target domain is boosted. Our novel approach indeed achieves state-of-the-art performance on different challenging synthetic-2-real benchmarks.

Preparation

Pre-requisites

  • Python 3.7
  • PyTorch >= 1.2.0
  • CUDA 10.0 or higher
  • The latest version of the ADVENT code.
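
A minimal environment setup might look like the following; the exact conda incantation is an assumption, and any installation providing Python 3.7, PyTorch >= 1.2.0 and CUDA 10.0 works just as well:

$ conda create -n dada python=3.7
$ conda activate dada
$ conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch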

Installation

  1. Install OpenCV if you don't already have it:
$ conda install -c menpo opencv
  2. Install the latest version of ADVENT:
$ git clone https://github.com/valeoai/ADVENT.git
$ pip install -e ./ADVENT
  3. Clone and install the repo:
$ git clone https://github.com/valeoai/DADA
$ pip install -e ./DADA

With this (pip's -e option), you can edit the DADA code on the fly and import DADA's functions and classes in other projects as well.

  4. Optional. To uninstall this package, run:
$ pip uninstall DADA

You can take a look at the Dockerfile if you are uncertain about steps to install this project.
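
As a quick sanity check that the editable install worked, you can print the package location (a minimal sketch; the module name dada is inferred from the repository layout). The printed path should point into your local DADA checkout, so code edits take effect without reinstalling:

$ python -c "import dada; print(dada.__file__)"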

Datasets

By default, the datasets are put in DADA/data. We use symlinks to hook the DADA codebase to the datasets (see the sketch after the list below). An alternative option is to explicitly specify the parameters DATA_DIRECTORY_SOURCE and DATA_DIRECTORY_TARGET in the YML configuration files.

  • SYNTHIA: Please first follow the instructions here to download the images. In this work, we used the SYNTHIA-RAND-CITYSCAPES (CVPR16) split. The segmentation labels can be found here. The dataset directory should have this basic structure:

    DADA/data/SYNTHIA                           % SYNTHIA dataset root
    ├── RGB
    ├── parsed_LABELS
    └── Depth
  • Cityscapes: Please follow the instructions in Cityscapes to download the images and validation ground-truths. The Cityscapes dataset directory should have this basic structure:

    DADA/data/Cityscapes                       % Cityscapes dataset root
    ├── leftImg8bit
    │   ├── train
    │   └── val
    └── gtFine
        └── val
  • Mapillary: Please follow the instructions in Mapillary to download the images and validation ground-truths. The Mapillary dataset directory should have this basic structure:

    DADA/data/mapillary                        % Mapillary dataset root
    ├── train
    │   └── images
    └── validation
        ├── images
        └── labels
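
If the datasets already live elsewhere on disk, the symlink hooks mentioned above can be created as follows (the /path/to/... locations are placeholders for your actual storage):

$ mkdir -p DADA/data
$ ln -s /path/to/SYNTHIA DADA/data/SYNTHIA
$ ln -s /path/to/Cityscapes DADA/data/Cityscapes
$ ln -s /path/to/mapillary DADA/data/mapillary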

Pre-trained models

Pre-trained models can be downloaded here and put in DADA/pretrained_models.
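
For example, setting up the directory might look like this (a sketch only; the .pth extension and download location are assumptions):

$ mkdir -p DADA/pretrained_models
$ mv ~/Downloads/*.pth DADA/pretrained_models/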

Running the code

For evaluating pretrained networks, execute:

$ cd DADA/dada/scripts
$ python test.py --cfg ./<configs_dir>/dada_pretrained.yml
$ python test.py --cfg ./<configs_dir>/dada_cyclegan_pretrained.yml

<configs_dir> can be set to configs_s2c (SYNTHIA2Cityscapes) or configs_s2m (SYNTHIA2Mapillary).
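
For instance, to evaluate the pretrained DADA model on SYNTHIA2Cityscapes:

$ python test.py --cfg ./configs_s2c/dada_pretrained.yml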

Training

For the experiments reported in the paper, we used PyTorch 1.2.0 and CUDA 10.0. To aid reproducibility, the random seed has been fixed in the code. Still, you may need to train a few times, or to train longer (by changing MAX_ITERS and EARLY_STOP), to reach comparable performance.
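
For example, a longer run could be configured by raising these two values in the experiment's YML file. This is a sketch assuming the ADVENT-style config layout, with illustrative values:

TRAIN:
  MAX_ITERS: 250000   # total number of optimization steps (illustrative value)
  EARLY_STOP: 150000  # iteration at which training stops early (illustrative value)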

By default, logs and snapshots are stored in DADA/experiments with this structure:

DADA/experiments
โ”œโ”€โ”€ logs
โ””โ”€โ”€ snapshots

To train DADA:

$ cd DADA/dada/scripts
$ python train.py --cfg ./<configs_dir>/dada.yml
$ python train.py --cfg ./<configs_dir>/dada.yml --tensorboard         % using tensorboard

To train AdvEnt baseline:

$ cd DADA/dada/scripts
$ python train.py --cfg ./<configs_dir>/advent.yml
$ python train.py --cfg ./<configs_dir>/advent.yml --tensorboard         % using tensorboard
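
When training with the --tensorboard flag, the curves can be monitored by pointing TensorBoard at the experiment logs (assuming the default DADA/experiments layout shown below):

$ tensorboard --logdir DADA/experiments/logs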

Testing

To test DADA:

$ cd DADA/dada/scripts
$ python test.py --cfg ./<configs_dir>/dada.yml

Acknowledgements

This codebase heavily depends on AdvEnt.

License

DADA is released under the Apache 2.0 license.

More Repositories

1. WoodScape (Python, 602 stars): The repository containing tools and information about the WoodScape dataset.
2. ADVENT (Python, 379 stars): Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation.
3. LOST (Python, 234 stars): PyTorch implementation of the LOST unsupervised object discovery method.
4. xmuda (Python, 192 stars): Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation.
5. ZS3 (Python, 187 stars): Zero-Shot Semantic Segmentation.
6. POCO (Python, 178 stars).
7. SLidR (Python, 172 stars): Official PyTorch implementation of "Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data".
8. ConfidNet (Python, 162 stars): Addressing Failure Prediction by Learning Model Confidence.
9. ALSO (Python, 158 stars): ALSO: Automotive Lidar Self-supervision by Occupancy estimation.
10. RADIal (Jupyter Notebook, 155 stars).
11. Maskgit-pytorch (Jupyter Notebook, 142 stars).
12. BF3S (Python, 136 stars): Boosting Few-Shot Visual Learning with Self-Supervision.
13. FLOT (Python, 95 stars): FLOT: Scene Flow Estimation by Learned Optimal Transport on Point Clouds.
14. obow (Python, 95 stars).
15. carrada_dataset (Jupyter Notebook, 84 stars).
16. rainbow-iqn-apex (Python, 75 stars): Distributed Rainbow-IQN for Atari.
17. rangevit (Python, 72 stars).
18. FOUND (Python, 66 stars): PyTorch code for Unsupervised Object Localization: Observing the Background to Discover Objects.
19. PointBeV (Python, 63 stars): Official implementation of PointBeV: A Sparse Approach to BeV Predictions.
20. LightConvPoint (Python, 62 stars).
21. Awesome-Unsupervised-Object-Localization (61 stars): Curated list of awesome works on unsupervised object localization in 2D images.
22. BEVContrast (Python, 60 stars): BEVContrast: Self-Supervision in BEV Space for Automotive Lidar Point Clouds (official PyTorch implementation).
23. MVRSS (Python, 58 stars).
24. FKAConv (Python, 40 stars).
25. SALUDA (Python, 38 stars): SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation.
26. LaRa (Python, 38 stars): LaRa: Latents and Rays for Multi-Camera Bird's-Eye-View Semantic Segmentation.
27. WaffleIron (Python, 37 stars).
28. obsnet (Python, 32 stars).
29. BUDA (32 stars): Boundless Unsupervised Domain Adaptation in Semantic Segmentation.
30. ScaLR (Python, 32 stars): PyTorch code and models for the ScaLR image-to-lidar distillation method.
31. 3DGenZ (Python, 31 stars): Public repository of the 3DV 2021 paper "Generative Zero-Shot Learning for Semantic Segmentation of 3D Point Clouds".
32. SemanticPalette (Python, 29 stars): Semantic Palette: Guiding Scene Generation with Class Proportions.
33. xmuda_journal (Python, 28 stars): [TPAMI] Cross-modal Learning for Domain Adaptation in 3D Semantic Segmentation.
34. NeeDrop (Python, 27 stars): NeeDrop: Self-supervised Shape Representation from Sparse Point Clouds using Needle Dropping.
35. PCAM (Python, 27 stars).
36. MTAF (Python, 23 stars): Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation.
37. ESL (Python, 19 stars): ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in Semantic Segmentation.
38. STEEX (Python, 18 stars): STEEX: Steering Counterfactual Explanations with Semantics.
39. OCTET (Python, 17 stars).
40. CAB (Python, 16 stars).
41. MuHDi (Python, 15 stars): Official PyTorch implementation of "Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation".
42. diffhpe (Python, 14 stars): Official code of "DiffHPE: Robust, Coherent 3D Human Pose Lifting with Diffusion".
43. bravo_challenge (Python, 14 stars): BRAVO Challenge Toolkit and Evaluation Code.
44. sfrik (Python, 12 stars): Official code for "Self-supervised learning with rotation-invariant kernels".
45. BEEF (Python, 11 stars).
46. SP4ASC (Python, 7 stars).
47. bownet (7 stars): Learning Representations by Predicting Bags of Visual Words.
48. QuEST (Python, 5 stars).
49. MFEval (Python, 5 stars): [ICRA2024] Towards Motion Forecasting with Real-World Perception Inputs: Are End-to-End Approaches Competitive? Official implementation of the evaluation protocol proposed in this work for motion forecasting models with real-world perception inputs.
50. Occfeat (5 stars).
51. dl_utils (Python, 3 stars): The library used in the Valeo deep learning training.
52. tutorial-images (2 stars).
53. PAFUSE (Python, 2 stars): Official repository of PAFUSE.
54. MOCA (2 stars): MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments.
55. valeoai.github.io (JavaScript, 1 star).