
WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving

The repository containing tools and information about the WoodScape dataset https://woodscape.valeo.com.

OmniDet: Surround View Cameras based Multi-task Visual Perception Network for Autonomous Driving

The repository contains boilerplate code to encourage further research in building a unified perception model for autonomous driving.


Update (Nov 16th, 2021): Weather dataset for classification has been uploaded here

Update (Nov 8th, 2021): ChargePad dataset for object detection has been uploaded here

Update (May 20th, 2021): Scripts to generate dense polygon points for instance segmentation have been added. Precomputed boxes and polygon points (uniformly spaced) are now available for download here

Update (April 15th, 2021): Calibration files (intrinsic and extrinsic parameters) are now available in our Google Drive (link).

Information on calibration process can be found here

Update (March 5th, 2021): The WoodScape paper was published at ICCV in November 2019, and we announced that the dataset was planned for release in Q1 2020. Unfortunately, unexpected data protection policies were required to comply with EU GDPR and Chinese data laws. Specifically, we had to remove one third of our dataset, which was recorded in China, and employ a third-party anonymization company for the remaining data. This was exacerbated by the COVID situation and the subsequent economic downturn impacting the automotive sector. We apologize for delaying the release by more than a year.

Finally, we have released the first set of tasks in our Google Drive (link). It contains 8.2K images along with their corresponding 8.2K previous images needed for geometric tasks. The remaining 1.8K test samples are held out for a benchmark. The release currently has annotations for semantic segmentation, instance segmentation, motion segmentation and 2D bounding boxes. Soiling detection and end-to-end driving prediction tasks will be released by March 15th, 2021. Sample scripts to use the data will be added to GitHub shortly as well. Once this first set of tasks is complete and tested, additional tasks will be added gradually. The upcoming website will include an overview of the status of the additional tasks.

Despite the delay, we believe the dataset remains unique in the field, and we understand that it has been long awaited by many researchers. We hope that an ecosystem of research in multi-task fisheye camera development will thrive based on this dataset. We will continue to fix bugs, support and develop the dataset, and any feedback will be taken on board.

Demo

Please click on the image below for a teaser video showing annotated examples and sample results.

Dataset Contents

This dataset version consists of 10K images with annotations for 7 tasks.

  • RGB images
  • Semantic segmentation
  • 2D bounding boxes
  • Instance segmentation
  • Motion segmentation
  • Previous images
  • CAN information
  • Lens soiling data and annotations
  • Calibration Information
  • Dense polygon points for objects

Coming Soon:

  • Fisheye synthetic data with semantic annotations
  • Lidar and dGPS scenes

Data organization

woodscape
│   README.md
│
└───rgb_images
│   │   00001_[CAM].png
│   │   00002_[CAM].png
│   │   ...
│   │
└───previous_images
│   │   00001_[CAM]_prev.png
│   │   00002_[CAM]_prev.png
│   │   ...
│   │
└───semantic_annotations
│   │   rgbLabels
│   │   │   00001_[CAM].png
│   │   │   00002_[CAM].png
│   │   │   ...
│   │   gtLabels
│   │   │   00001_[CAM].png
│   │   │   00002_[CAM].png
│   │   │   ...
│   │
└───box_2d_annotations
│   │   00001_[CAM].png
│   │   00002_[CAM].png
│   │   ...
│   │
└───instance_annotations
│   │   00001_[CAM].json
│   │   00002_[CAM].json
│   │   ...
│   │
└───motion_annotations
│   │   rgbLabels
│   │   │   00001_[CAM].png
│   │   │   00002_[CAM].png
│   │   │   ...
│   │   gtLabels
│   │   │   00001_[CAM].png
│   │   │   00002_[CAM].png
│   │   │   ...
│   │
└───vehicle_data
│   │   00001_[CAM].json
│   │   00002_[CAM].json
│   │   ...
│   │
└───calibration_data
│   │   00001_[CAM].json
│   │   00002_[CAM].json
│   │   ...
│   │
└───soiling_dataset
    │   rgb_images
    │   │   00001_[CAM].png
    │   │   00002_[CAM].png
    │   │   ...
    │   gt_labels
    │   │   00001_[CAM].png
    │   │   00002_[CAM].png
    │   │   ...

[CAM]:

FV  --> Front CAM
RV  --> Rear CAM
MVL --> Mirror Left CAM
MVR --> Mirror Right CAM
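Given the layout and camera codes above, all per-frame files share a stem of frame id plus camera code. A minimal path-building sketch; the helper name and the `root` argument are illustrative, not part of the dataset tools:

```python
from pathlib import Path

def sample_paths(root, frame_id, cam):
    """Build the per-frame file paths implied by the directory layout above.

    frame_id: zero-padded string such as "00001"; cam: one of FV, RV, MVL, MVR.
    """
    root = Path(root)
    stem = f"{frame_id}_{cam}"
    return {
        "rgb": root / "rgb_images" / f"{stem}.png",
        "previous": root / "previous_images" / f"{stem}_prev.png",
        "semantic_gt": root / "semantic_annotations" / "gtLabels" / f"{stem}.png",
        "instance": root / "instance_annotations" / f"{stem}.json",
        "vehicle": root / "vehicle_data" / f"{stem}.json",
        "calibration": root / "calibration_data" / f"{stem}.json",
    }

paths = sample_paths("woodscape", "00001", "FV")
print(paths["rgb"])
```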

Annotation Information

  • Instance annotations are provided for more than 40 classes as polygons in JSON format. A full list of classes can be found in "/scripts/mappers/class_names.json"

  • We provide semantic segmentation annotations for 10 classes: void, road, lanes, curbs, rider, person, vehicles, bicycle, motorcycle and traffic_sign. You can generate segmentation annotations for all 40+ classes using the provided scripts. See the example configs: "scripts/configs/semantic_mapping_3_classes.json" for 3 (+void) classes, and "scripts/configs/semantic_mapping_9_classes.json" for 9 (+void) classes.

  • We provide 2D boxes for 5 classes: pedestrians, vehicles, bicycle, traffic lights and traffic signs. You can generate 2D boxes for 14+ classes using the provided scripts. See the example config "scripts/configs/box_2d_mapping_5_classes.json" for the 5-class mapping.

    • We also provide dense polygon points for the above 5 classes. These dense, uniformly spaced points can be used to generate instance masks.
  • Motion annotations are available for 19 classes. A full list of classes, indexes and colour coding can be found in motion_class_mapping.json
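The relationship between the instance polygons and the derived 2D boxes can be illustrated with a short sketch. The polygon representation here (a plain list of (x, y) points) is an assumption for illustration, not the exact schema of the annotation JSON:

```python
def box_from_polygon(points):
    """Axis-aligned 2D box (x_min, y_min, x_max, y_max) enclosing a polygon.

    Mirrors the idea behind box_2d_generator.py: the released 2D boxes are
    derived from the instance polygon annotations.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

# Hypothetical polygon for one object instance:
polygon = [(120, 340), (180, 330), (200, 400), (130, 410)]
print(box_from_polygon(polygon))  # (120, 330, 200, 410)
```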

Installation

Use the package manager pip to install the required packages.

pip install numpy opencv-python tqdm shapely Pillow matplotlib

On Windows, Shapely might raise OSError: [WinError 126]; as an alternative, use the conda distribution or install directly from a .whl file.

Usage

To generate semantic annotations, 2D boxes, or dense polygon points for additional classes, use the following scripts:

semantic_map_generator.py: Generates the semantic segmentation annotations from the JSON instance annotations

python semantic_map_generator.py --src_path [DATASET DIR]/data/instance_annotations/ --dst_path [DATASET DIR]/data/semantic_annotations --semantic_class_mapping [DATASET DIR]/scripts/configs/semantic_mapping_9_classes.json --instance_class_mapping [DATASET DIR]/scripts/mappers/class_names.json
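Conceptually, the script resolves each instance's class name to a semantic label index via the two mapping files. A hedged sketch of that lookup, with an invented two-class mapping standing in for the real contents of the JSON configs:

```python
# Illustrative stand-ins for the JSON mapping files; the real files under
# scripts/configs and scripts/mappers define the authoritative mappings.
instance_to_semantic = {"car": "vehicles", "truck": "vehicles", "person": "person"}
semantic_index = {"void": 0, "vehicles": 1, "person": 2}

def semantic_id(instance_class):
    """Resolve an instance class name to a semantic label index (void if unmapped)."""
    return semantic_index.get(instance_to_semantic.get(instance_class, "void"), 0)

print(semantic_id("truck"))  # 1
print(semantic_id("sky"))    # 0 (unmapped classes fall back to void)
```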

box_2d_generator.py: Generates the 2D boxes from the JSON instance annotations

python box_2d_generator.py --src_path [DATASET DIR]/data/instance_annotations/ --dst_path [DATASET DIR]/data/box_2d_annotations --box_2d_class_mapping [DATASET DIR]/scripts/configs/box_2d_mapping_5_classes.json --instance_class_mapping [DATASET DIR]/scripts/mappers/class_names.json --rgb_image_path [DATASET DIR]/data/rgb_images

polygon_generator.py: Generates the dense polygon points from the JSON instance annotations

python polygon_generator.py --src_path [DATASET DIR]/data/instance_annotations/ --dst_path [DATASET DIR]/data/polygon_annotations --box_2d_class_mapping [DATASET DIR]/scripts/configs/box_2d_mapping_5_classes.json --instance_class_mapping [DATASET DIR]/scripts/mappers/class_names.json --rgb_image_path [DATASET DIR]/data/rgb_images
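The "dense uniform points" mentioned above are, in essence, points resampled at roughly equal spacing along the polygon boundary. A sketch of that resampling under the same assumed (x, y) vertex-list representation; this is not the script's actual code:

```python
import math

def densify(points, spacing):
    """Resample a closed polygon's boundary at (approximately) uniform spacing."""
    dense = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        length = math.hypot(x1 - x0, y1 - y0)
        # At least one point per edge; more for edges longer than the spacing.
        steps = max(1, int(length // spacing))
        for s in range(steps):
            t = s / steps
            dense.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return dense

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(len(densify(square, 2)))  # 20 points: 5 per 10-unit edge
```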

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License for the code

MIT

License for the data

Proprietary

Paper

WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving
Senthil Yogamani, Ciaran Hughes, Jonathan Horgan, Ganesh Sistu, Padraig Varley, Derek O'Dea, Michal Uricar, Stefan Milz, Martin Simon, Karl Amende, Christian Witt, Hazem Rashed, Sumanth Chennupati, Sanjaya Nayak, Saquib Mansoor, Xavier Perroton, Patrick Perez
Valeo
IEEE International Conference on Computer Vision (ICCV), 2019 (Oral)

If you find our dataset useful, please cite our paper:

@article{yogamani2019woodscape,
  title={WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving},
  author={Yogamani, Senthil and Hughes, Ciar{\'a}n and Horgan, Jonathan and Sistu, Ganesh and Varley, Padraig and O'Dea, Derek and Uric{\'a}r, Michal and Milz, Stefan and Simon, Martin and Amende, Karl and others},
  journal={arXiv preprint arXiv:1905.01489},
  year={2019}
}
