Watch the video

Latest news:

  • 02/11/2023: :rotating_light::construction: Due to some issues we encountered with the storage, the data is temporarily unavailable. We are currently working on a new solution to make the dataset available again as soon as possible. Thank you for your patience and understanding. We apologize for the inconvenience.
  • 10/04/2023: Both the "ready to use" and the complete datasets are available again; we are sorry for the inconvenience.
  • 20/03/2023: Due to some issues we encountered with the storage, the data is temporarily unavailable. We are currently working on a new solution to make the dataset available again as soon as possible. Thank you for your patience and understanding. We apologize for the inconvenience.

RADIal dataset

RADIal stands for “Radar, Lidar et al.” It is a collection of 2 hours of raw data from synchronized automotive-grade sensors (camera, laser, high-definition radar) in various environments (city street, highway, countryside road), and comes with GPS and the vehicle’s CAN traces.

RADIal contains 91 sequences of 1 to 4 minutes each, for a total of 2 hours. The sequences are categorized as highway, countryside, and city driving; their distribution is indicated in the figure below. Each sequence contains the raw sensor signals recorded at their native frame rates. There are approximately 25,000 frames in which the three sensors are synchronized, out of which 8,252 are labelled, with a total of 9,550 annotated vehicles.

If you find this code useful for your research, please cite our paper:

@InProceedings{Rebut_2022_CVPR,
    author    = {Rebut, Julien and Ouaknine, Arthur and Malik, Waqas and P\'erez, Patrick},
    title     = {Raw High-Definition Radar for Multi-Task Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {17021-17030}
}

Sensor specifications

Central to the RADIal dataset, our high-definition radar is composed of NRx = 16 receiving antennas and NTx = 12 transmitting antennas, leading to NRx · NTx = 192 virtual antennas. This virtual-antenna array reaches a high azimuth angular resolution while also estimating objects’ elevation angles. As the radar signal is difficult to interpret for annotators and practitioners alike, a 16-layer automotive-grade laser scanner (LiDAR) and a 5 Mpix RGB camera are also provided. The camera is placed below the interior mirror behind the windshield, while the radar and the LiDAR are installed in the middle of the front ventilation grid, one above the other. The three sensors have parallel horizontal lines of sight, pointing in the driving direction. Their extrinsic parameters are provided together with the dataset. RADIal also offers synchronized GPS and CAN traces, which give access to the geo-referenced position of the vehicle as well as its driving information, such as speed, steering-wheel angle, and yaw rate. The sensors’ specifications are detailed in the table below.
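
For intuition on where the 192 virtual antennas come from: in a MIMO radar, each Tx–Rx pair behaves like a single antenna located at the sum of the Tx and Rx element positions. The sketch below illustrates this construction for idealized uniform linear arrays; the spacings are assumptions for illustration only, and the actual RADIal antenna layout is provided with the dataset’s calibration data.

```python
import numpy as np

# Hypothetical element positions along the azimuth axis, in units of
# half-wavelengths. The real RADIal layout differs; these uniform
# spacings are illustrative assumptions only.
n_rx, n_tx = 16, 12
rx_pos = np.arange(n_rx) * 1.0          # Rx elements spaced by lambda/2
tx_pos = np.arange(n_tx) * n_rx * 1.0   # Tx elements spaced by NRx * lambda/2

# Each (Tx, Rx) pair acts as one virtual element located at the sum
# of the two physical positions.
virtual_pos = (tx_pos[:, None] + rx_pos[None, :]).ravel()
print(virtual_pos.size)  # 192 virtual antennas (16 x 12)
```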

Dataset structure

RADIal is a single folder containing all the recorded sequences. Each sequence is itself a folder containing the following (a small enumeration sketch follows the list):

  • A preview video of the scene (low resolution);
  • The camera data compressed in MJPEG format;
  • The laser scanner point-cloud data saved in a binary file;
  • The ADC radar data saved in binary files: there are 4 files in total, one per radar chip, each chip hosting 4 Rx antennas;
  • The GPS data saved in ASCII format;
  • The CAN traces of the vehicle saved in binary format;
  • And finally, a log file that provides the timestamp of each individual sensor event.
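
A minimal sketch of how one might enumerate the sequence folders and their contents with the standard library, assuming the dataset root has been downloaded locally (the root path is a placeholder):

```python
from pathlib import Path

# Hypothetical local path to the downloaded RADIal root folder.
radial_root = Path("/data/RADIal")

# Each direct subfolder is one recorded sequence.
for sequence in sorted(p for p in radial_root.iterdir() if p.is_dir()):
    print(f"Sequence: {sequence.name}")
    # List the per-sequence files (video preview, camera MJPEG, laser
    # point cloud, the four radar ADC binaries, GPS, CAN, and the log).
    for f in sorted(sequence.iterdir()):
        print(f"  {f.name}  ({f.stat().st_size / 1e6:.1f} MB)")
```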

We provide a Python library, DBReader, to read the data. Because all the radar data are recorded in a raw format, that is to say the signal after Analog-to-Digital Conversion (ADC), we also provide an optimized Python library, SignalProcessing, to process the radar signal and generate either the power spectrums, the point cloud, or the range-azimuth map.
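
For intuition, the sketch below shows the classical first step of this kind of processing: turning a block of raw ADC chirps into a range-Doppler power spectrum with windowed FFTs. The array shape, the windowing, and the random stand-in data are assumptions for illustration; the SignalProcessing library implements the actual, calibrated pipeline and documents the real binary layout.

```python
import numpy as np

# Hypothetical raw ADC block for one antenna: n_chirps x n_samples of
# complex baseband samples. Random data stands in for a real recording.
n_chirps, n_samples = 256, 512
adc = (np.random.randn(n_chirps, n_samples)
       + 1j * np.random.randn(n_chirps, n_samples))

# Window along both axes to reduce spectral leakage.
win = np.hanning(n_chirps)[:, None] * np.hanning(n_samples)[None, :]

# Range FFT along fast time (samples), Doppler FFT along slow time
# (chirps), with zero Doppler shifted to the center.
rd = np.fft.fft(adc * win, axis=1)
rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)

power_db = 20 * np.log10(np.abs(rd) + 1e-12)  # power spectrum in dB
print(power_db.shape)  # (256, 512) range-Doppler map
```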

Labels

Out of the 25,000 synchronized frames, 8,252 frames are labelled. Labels for vehicles are stored in a separate CSV file. Each label contains the following information:

  • numSample: index of the current synchronized sample across all the sensors; that is to say, this label can be projected into each individual sensor with a common dataset_index value. Note that there might be more than one line with the same numSample, one line per label;
  • [x1_pix, y1_pix, x2_pix, y2_pix]: 2D coordinates of the vehicle's bounding box in the camera coordinate system;
  • [laser_X_m, laser_Y_m, laser_Z_m]: 3D coordinates of the vehicle in the laser-scanner coordinate system. Note that this 3D point is the middle of either the back or the front visible face of the vehicle;
  • [radar_X_m, radar_Y_m, radar_R_m, radar_A_deg, radar_D, radar_P_db]: 2D coordinates (bird's-eye view) of the vehicle in the radar coordinate system, either in Cartesian (X, Y) or polar (R, A) form. radar_D is the Doppler value and radar_P_db is the power of the reflected signal;
  • dataset: name of the sequence the label belongs to;
  • dataset_index: frame index in the current sequence;
  • Difficult: either 0 or 1.

Note that a value of -1 in all fields means a frame without any label.

Labels for the free driving space are provided as a segmentation mask saved in a PNG file.
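
A minimal sketch of loading the vehicle labels with pandas, using the column names listed above; the CSV file name and path are placeholders, not the actual file name shipped with the dataset:

```python
import pandas as pd

# Hypothetical path to the vehicle label file.
labels = pd.read_csv("labels.csv")

# Frames without any label carry -1 in all fields; keep real annotations.
annotated = labels[labels["numSample"] != -1]

# Several rows may share one numSample: one row per labelled vehicle.
vehicles_per_frame = annotated.groupby("numSample").size()
print(vehicles_per_frame.describe())

# Example: polar radar position, Doppler, and reflected power per label.
print(annotated[["radar_R_m", "radar_A_deg", "radar_D", "radar_P_db"]].head())
```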

Download instructions

To download the raw dataset, please visit the following Google Drive

You will then have to use the SignalProcessing library to generate the data for each modality according to your needs.

We also provide a "ready to use" dataset that can be loaded with the PyTorch data-loader example provided in the Loader folder.
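
A minimal sketch of how such a PyTorch loader is typically wired up, assuming a dataset class exposing __len__ and __getitem__; the class name, paths, and tensor shapes below are placeholders, not the actual API of the Loader folder:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RADIalFrames(Dataset):
    """Hypothetical stand-in for the dataset class in the Loader folder."""

    def __init__(self, root):
        self.root = root
        self.index = list(range(100))  # placeholder frame index

    def __len__(self):
        return len(self.index)

    def __getitem__(self, i):
        # A real implementation would return the synchronized camera,
        # radar, and laser data plus the labels for frame i.
        dummy_radar = torch.zeros(1, 256, 512)
        dummy_label = torch.zeros(6)
        return dummy_radar, dummy_label

loader = DataLoader(RADIalFrames("/data/RADIal_ready"),
                    batch_size=4, shuffle=True, num_workers=2)
for radar, label in loader:
    print(radar.shape, label.shape)
    break
```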

More Repositories

1. WoodScape (Python, 575 stars): The repository containing tools and information about the WoodScape dataset.
2. ADVENT (Python, 373 stars): Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation
3. LOST (Python, 226 stars): PyTorch implementation of the LOST unsupervised object discovery method
4. xmuda (Python, 187 stars): Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation
5. ZS3 (Python, 178 stars): Zero-Shot Semantic Segmentation
6. POCO (Python, 168 stars)
7. SLidR (Python, 159 stars): Official PyTorch implementation of "Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data"
8. ConfidNet (Python, 158 stars): Addressing Failure Prediction by Learning Model Confidence
9. ALSO (Python, 151 stars): ALSO: Automotive Lidar Self-supervision by Occupancy estimation
10. BF3S (Python, 138 stars): Boosting Few-Shot Visual Learning with Self-Supervision
11. DADA (Python, 114 stars): Depth-aware Domain Adaptation in Semantic Segmentation
12. obow (Python, 94 stars)
13. FLOT (Python, 92 stars): FLOT: Scene Flow Estimation by Learned Optimal Transport on Point Clouds
14. carrada_dataset (Jupyter Notebook, 83 stars)
15. Maskgit-pytorch (Jupyter Notebook, 80 stars)
16. rainbow-iqn-apex (Python, 75 stars): Distributed Rainbow-IQN for Atari
17. rangevit (Python, 68 stars)
18. LightConvPoint (Python, 61 stars)
19. FOUND (Python, 57 stars): PyTorch code for Unsupervised Object Localization: Observing the Background to Discover Objects
20. MVRSS (Python, 54 stars)
21. Awesome-Unsupervised-Object-Localization (51 stars): Curated list of awesome works on unsupervised object localization in 2D images.
22. PointBeV (Python, 48 stars): Official implementation of PointBeV: A Sparse Approach to BeV Predictions
23. BEVContrast (Python, 46 stars): BEVContrast: Self-Supervision in BEV Space for Automotive Lidar Point Clouds (official PyTorch implementation)
24. FKAConv (Python, 38 stars)
25. SALUDA (Python, 36 stars): SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation
26. LaRa (Python, 34 stars): LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation
27. BUDA (32 stars): Boundless Unsupervised Domain Adaptation in Semantic Segmentation
28. obsnet (Python, 32 stars)
29. WaffleIron (Python, 31 stars)
30. 3DGenZ (Python, 30 stars): Public repository of the 3DV 2021 paper "Generative Zero-Shot Learning for Semantic Segmentation of 3D Point Clouds"
31. NeeDrop (Python, 28 stars): NeeDrop: Self-supervised Shape Representation from Sparse Point Clouds using Needle Dropping
32. SemanticPalette (Python, 27 stars): Semantic Palette: Guiding Scene Generation with Class Proportions
33. PCAM (Python, 26 stars)
34. xmuda_journal (Python, 26 stars): [TPAMI] Cross-modal Learning for Domain Adaptation in 3D Semantic Segmentation
35. MTAF (Python, 23 stars): Multi-Target Adversarial Frameworks for Domain Adaptation in Semantic Segmentation
36. ESL (Python, 18 stars): ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in Semantic Segmentation
37. STEEX (Python, 18 stars): STEEX: Steering Counterfactual Explanations with Semantics
38. ScaLR (Python, 18 stars): PyTorch code and models for the ScaLR image-to-lidar distillation method
39. OCTET (Python, 17 stars)
40. CAB (Python, 16 stars)
41. MuHDi (Python, 14 stars): Official PyTorch implementation of "Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation"
42. diffhpe (Python, 12 stars): Official code of "DiffHPE: Robust, Coherent 3D Human Pose Lifting with Diffusion"
43. sfrik (Python, 12 stars): Official code for "Self-supervised learning with rotation-invariant kernels"
44. BEEF (Python, 10 stars)
45. SP4ASC (Python, 7 stars)
46. bownet (7 stars): Learning Representations by Predicting Bags of Visual Words
47. QuEST (Python, 5 stars)
48. MFEval (Python, 4 stars): [ICRA2024] Towards Motion Forecasting with Real-World Perception Inputs: Are End-to-End Approaches Competitive? Official implementation of the evaluation protocol proposed in this work for motion forecasting models with real-world perception inputs.
49. dl_utils (Python, 3 stars): The library used in the Valeo deep learning training.
50. tutorial-images (2 stars)
51. MOCA (2 stars): MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments