
Watch the video

Latest news:

  • 02/11/2023: :rotating_light::construction: Due to some issues we encountered with the storage, the data is temporarily unavailable. We are currently working on a new solution to make the dataset available again as soon as possible. Thank you for your patience and understanding. We apologize for the inconvenience.
  • 10/04/2023: Both the "ready to use" and the entire datasets are available again; we are sorry for the inconvenience.
  • 20/03/2023: Due to some issues we encountered with the storage, the data is temporarily unavailable. We are currently working on a new solution to make the dataset available again as soon as possible. Thank you for your patience and understanding. We apologize for the inconvenience.

RADIal dataset

RADIal stands for “Radar, Lidar et al.” It is a collection of 2 hours of raw data from synchronized automotive-grade sensors (camera, laser, high-definition radar) in various environments (city street, highway, countryside road) and comes with GPS and the vehicle’s CAN traces.

RADIal contains 91 sequences of 1 to 4 minutes each, for a total of 2 hours. These sequences are categorized as highway, countryside and city driving. The distribution of the sequences is shown in the figure below. Each sequence contains the raw sensor signals recorded at their native frame rates. There are approximately 25,000 frames with the three sensors synchronized, out of which 8,252 are labelled, with a total of 9,550 vehicles.

If you find this code useful for your research, please cite our paper:

@InProceedings{Rebut_2022_CVPR,
    author    = {Rebut, Julien and Ouaknine, Arthur and Malik, Waqas and P\'erez, Patrick},
    title     = {Raw High-Definition Radar for Multi-Task Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {17021-17030}
}

Sensor specifications

Central to the RADIal dataset, our high-definition radar is composed of NRx = 16 receiving antennas and NTx = 12 transmitting antennas, leading to NRx·NTx = 192 virtual antennas. This virtual-antenna array enables a high azimuth angular resolution while also allowing estimation of objects’ elevation angles. As the radar signal is difficult to interpret for annotators and practitioners alike, a 16-layer automotive-grade laser scanner (LiDAR) and a 5-Mpix RGB camera are also provided. The camera is placed below the interior mirror behind the windshield, while the radar and the LiDAR are installed in the middle of the front ventilation grille, one above the other. The three sensors have parallel horizontal lines of sight, pointing in the driving direction. Their extrinsic parameters are provided together with the dataset. RADIal also offers synchronized GPS and CAN traces, which give access to the geo-referenced position of the vehicle as well as its driving information such as speed, steering-wheel angle and yaw rate. The sensors’ specifications are detailed in the table below.
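
As a minimal sketch of how such a MIMO virtual array is formed (the λ/2-based element spacings below are illustrative assumptions, not the actual RADIal antenna layout):

```python
import numpy as np

# MIMO virtual array: each (Tx, Rx) pair behaves like one virtual element
# located at the sum of the Tx and Rx positions (in units of wavelength).
# The spacings below are ILLUSTRATIVE assumptions, not RADIal's real layout.
n_tx, n_rx = 12, 16
tx_pos = np.arange(n_tx) * n_rx * 0.5   # hypothetical Tx spacing: NRx * lambda/2
rx_pos = np.arange(n_rx) * 0.5          # hypothetical Rx spacing: lambda/2

# The outer sum yields one virtual element per (Tx, Rx) pair.
virtual_pos = (tx_pos[:, None] + rx_pos[None, :]).ravel()
print(virtual_pos.size)  # 192 virtual antennas, matching NRx * NTx
```

Each transmitter-receiver pair acts as one virtual element, which is why 12 Tx and 16 Rx antennas span a 192-element aperture and hence a much finer azimuth resolution than either physical array alone.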

Dataset structure

RADIal is a single folder containing all the recorded sequences. Each sequence is a folder containing:

  • A preview video of the scene (low resolution);
  • The camera data, compressed in MJPEG format;
  • The laser scanner point-cloud data, saved in a binary file;
  • The ADC radar data, saved in binary files. There are 4 files in total, one per radar chip, each chip hosting 4 Rx antennas;
  • The GPS data, saved in ASCII format;
  • The CAN traces of the vehicle, saved in binary format;
  • And finally, a log file that provides the timestamp of each individual sensor event.

We provide a Python library, DBReader, to read the data. Because all the radar data are recorded in a raw format, that is to say the signal after Analog-to-Digital Conversion (ADC), we also provide an optimized Python library, SignalProcessing, to process the radar signal and generate either the power spectrums, the point cloud or the range-azimuth map.
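
For intuition only, here is a generic FMCW range-Doppler sketch in NumPy showing what turning raw ADC samples into a power spectrum involves; the array shape, random placeholder data and Hann windows are assumptions, not the actual SignalProcessing pipeline:

```python
import numpy as np

# Generic FMCW range-Doppler processing sketch (NOT the RADIal
# SignalProcessing pipeline; shapes and windows are assumptions).
# adc: complex ADC samples, shape (n_chirps, n_samples, n_rx).
rng = np.random.default_rng(0)
adc = rng.standard_normal((256, 512, 16)) + 1j * rng.standard_normal((256, 512, 16))

# Range FFT over fast time, Doppler FFT over slow time, with Hann
# windows to reduce spectral leakage.
win_r = np.hanning(adc.shape[1])
win_d = np.hanning(adc.shape[0])
rd = np.fft.fft(adc * win_r[None, :, None], axis=1)                  # range bins
rd = np.fft.fftshift(np.fft.fft(rd * win_d[:, None, None], axis=0),  # Doppler bins
                     axes=0)

# Non-coherent integration across receivers -> power spectrum in dB.
power_db = 10 * np.log10(np.abs(rd).sum(axis=2) ** 2 + 1e-12)
print(power_db.shape)  # (Doppler bins, range bins)
```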

Labels

Out of the 25,000 synchronized frames, 8,252 frames are labelled. Labels for vehicles are stored in a separate CSV file. Each label contains the following information:

  • numSample: number of the current synchronized sample across all the sensors; that is, this label can be projected into each individual sensor via a common dataset_index value. Note that there might be more than one line with the same numSample, one line per label;
  • [x1_pix, y1_pix, x2_pix, y2_pix]: 2D coordinates of the vehicle’s bounding box in image coordinates;
  • [laser_X_m, laser_Y_m, laser_Z_m]: 3D coordinates of the vehicle in the laser-scanner coordinate system. Note that this 3D point is the middle of either the back or the front visible face of the vehicle;
  • [radar_X_m, radar_Y_m, radar_R_m, radar_A_deg, radar_D, radar_P_db]: 2D coordinates (bird’s-eye view) of the vehicle in the radar coordinate system, either in Cartesian (X, Y) or polar (R, A) coordinates. radar_D is the Doppler value and radar_P_db is the power of the reflected signal;
  • dataset: name of the sequence it belongs to;
  • dataset_index: frame index in the current sequence;
  • Difficult: either 0 or 1.

Note that -1 in all fields means a frame without any label.

Labels for the free driving space are provided as a segmentation mask saved in a PNG file.
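
As a minimal sketch of how the vehicle labels might be consumed with pandas (the file name labels.csv is a hypothetical placeholder; the column names come from the list above):

```python
import pandas as pd

# Hypothetical path; the actual CSV name and location may differ.
labels = pd.read_csv("labels.csv")

# A frame without any label is encoded with -1 in all fields.
labelled = labels[labels["numSample"] != -1]

# Several rows can share the same numSample (one row per vehicle),
# so group them to recover per-frame annotations.
for num_sample, frame_labels in labelled.groupby("numSample"):
    boxes = frame_labels[["x1_pix", "y1_pix", "x2_pix", "y2_pix"]].to_numpy()
    # ... use boxes, radar_X_m / radar_Y_m, laser_*_m, etc.
```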

Download instructions

To download the raw dataset, please visit the following Google Drive.

You will then need to use the SignalProcessing library to generate the data for each modality according to your needs.

We also provide a "ready to use" dataset that can be loaded with the PyTorch data loader example provided in the Loader folder.
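
For orientation, a skeleton of such a PyTorch dataset is sketched below; the per-frame .npz layout and key names are hypothetical placeholders, so please refer to the Loader folder for the actual implementation:

```python
import numpy as np
from torch.utils.data import Dataset, DataLoader

class RADIalReadyToUse(Dataset):
    """Skeleton dataset; the file layout and key names are HYPOTHETICAL
    placeholders, not the official Loader implementation."""

    def __init__(self, sample_files):
        self.sample_files = sample_files  # e.g. one .npz per synchronized frame

    def __len__(self):
        return len(self.sample_files)

    def __getitem__(self, idx):
        sample = np.load(self.sample_files[idx])
        # Assumed keys: a pre-computed radar tensor and the target boxes.
        return sample["radar"], sample["boxes"]

# Typical usage once a list of sample files is at hand:
# loader = DataLoader(RADIalReadyToUse(files), batch_size=4, shuffle=True)
```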
