
Created by Martin Hahner at the Computer Vision Lab of ETH Zurich.


🌨 LiDAR Snowfall Simulation
for Robust 3D Object Detection

by Martin Hahner, Christos Sakaridis, Mario Bijelic, Felix Heide, Fisher Yu, Dengxin Dai, and Luc van Gool

📣 Oral at CVPR 2022.
Please visit our paper website for more details.

Overview

.
├── calib                     # contains the LiDAR sensor calibration file used in STF
│   └── ...
├── lib                       # contains external libraries as submodules
│   └── ...
├── splits                    # contains the splits we used for our experiments
│   └── ...
├── tools                     # contains our snowfall and wet ground simulation code
│   ├── snowfall
│   │   ├── geometry.py
│   │   ├── precompute.py
│   │   ├── sampling.py
│   │   └── simulation.py
│   └── wet_ground
│       ├── augmentation.py
│       ├── phy_equations.py
│       ├── planes.py
│       └── utils.py
├── .gitignore
├── .gitmodules
├── LICENSE
├── pointcloud_viewer.py      # to visualize LiDAR point clouds and apply various augmentations
├── README.md
└── teaser.gif
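
Below is a minimal, standalone visualization sketch in the spirit of pointcloud_viewer.py. It only uses pyqtgraph from the dependency list and random points in place of a real scan; it is not the repository's viewer, just an illustration of the kind of rendering it does.

# Standalone sketch (not the repository's viewer): render a point cloud with
# pyqtgraph's OpenGL scatter plot. Random points stand in for a real LiDAR scan.
import numpy as np
import pyqtgraph as pg
import pyqtgraph.opengl as gl

app = pg.mkQApp("minimal point cloud view")
view = gl.GLViewWidget()
view.setWindowTitle("LiDAR point cloud (random demo data)")

points = np.random.uniform(-50, 50, size=(100_000, 3)).astype(np.float32)  # fake scan
scatter = gl.GLScatterPlotItem(pos=points, size=1.5, color=(1.0, 1.0, 1.0, 0.6))
view.addItem(scatter)

view.show()
pg.exec()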

Datasets supported by pointcloud_viewer.py:

Note:
The snowfall and wet ground simulation has only been tested on the SeeingThroughFog (STF) dataset.

To support other datasets as well, code changes are required (see the sketch below).
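
As a rough, hypothetical illustration of the kind of change involved: supporting another dataset usually means loading its point clouds into the same NumPy array layout that the STF files use. The target layout assumed below (x, y, z, intensity, channel) and the file name are illustrative only; check the repository's STF loading code before relying on them.

# Hypothetical sketch: bring another dataset's point clouds into an STF-like layout.
# The assumed (x, y, z, intensity, channel) target layout and the file name are
# illustrative only -- verify against the repository's STF loading code.
import numpy as np

def load_kitti_style(path: str) -> np.ndarray:
    """Read a KITTI-style .bin file as an (N, 4) float32 array: x, y, z, intensity."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def to_stf_like(points_xyzi: np.ndarray) -> np.ndarray:
    """Append a placeholder channel column, giving an (N, 5) array: x, y, z, intensity, channel."""
    channel = np.zeros((points_xyzi.shape[0], 1), dtype=np.float32)  # use the real ring index if available
    return np.hstack([points_xyzi, channel])

points = to_stf_like(load_kitti_style("000000.bin"))  # "000000.bin" is a placeholder path
print(points.shape)  # (N, 5)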

License

This software is made available for non-commercial use under a Creative Commons License.
A summary of the license is available on the Creative Commons website.

Citation(s)

If you find this work useful, please consider citing our paper.

@inproceedings{HahnerCVPR22,
  author = {Hahner, Martin and Sakaridis, Christos and Bijelic, Mario and Heide, Felix and Yu, Fisher and Dai, Dengxin and Van Gool, Luc},
  title = {{LiDAR Snowfall Simulation for Robust 3D Object Detection}},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2022},
}

You may also want to check out our earlier work
Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather.

@inproceedings{HahnerICCV21,
  author = {Hahner, Martin and Sakaridis, Christos and Dai, Dengxin and Van Gool, Luc},
  title = {{Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather}},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year = {2021},
}

Getting Started

Setup

  1. Install Anaconda.

  2. Execute the following commands.

# Create a new conda environment.
conda create --name snowy_lidar python=3.9 -y

# Activate the newly created conda environment.
conda activate snowy_lidar

# Install dependencies.
conda install matplotlib pandas plyfile pyaml pyopengl pyqt pyqtgraph scipy scikit-learn tqdm -c conda-forge -y
pip install PyMieScatt pyquaternion

# Clone this repository (including submodules!).
git clone git@github.com:SysCV/LiDAR_snow_sim.git --recursive
cd LiDAR_snow_sim
  3. If you want to use our precomputed snowflake patterns, you can download them (2.3GB) as shown below.
wget https://www.trace.ethz.ch/publications/2022/lidar_snow_simulation/snowflakes.zip
unzip snowflakes.zip
rm snowflakes.zip
  4. If you want to use DROR (Dynamic Radius Outlier Removal) as well,
    you need to install PCL or download the point indices (215MB) as shown below.
wget https://www.trace.ethz.ch/publications/2022/lidar_snow_simulation/DROR.zip
unzip DROR.zip
rm DROR.zip
  5. Enjoy pointcloud_viewer.py.
python pointcloud_viewer.py
  6. If you also want to run inference on the STF dataset, a couple of extra steps are required
    (a quick sanity check of the resulting setup is sketched after the commands below).
    Note: for unknown reasons, this can slow down the augmentation(s) by roughly a factor of two.
# Download our checkpoints (265MB)
wget https://www.trace.ethz.ch/publications/2022/lidar_snow_simulation/experiments.zip
unzip experiments.zip
rm experiments.zip

# Install PyTorch.
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c conda-forge -c pytorch -y

# Install spconv.
pip install spconv-cu113

# Build pcdet (OpenPCDet).
cd lib/OpenPCDet
python setup.py develop
cd ../..
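
If the steps above went through, a quick import check like the following (an optional sanity check, not part of the repository) should run inside the snowy_lidar environment without errors and report that CUDA is available.

# Optional sanity check (not part of the repository): verify that PyTorch sees
# the GPU and that spconv and pcdet import cleanly after the steps above.
import torch
import spconv.pytorch  # spconv 2.x exposes its PyTorch bindings under this module
import pcdet           # installed in development mode by `python setup.py develop`

print("torch", torch.__version__, "| CUDA build", torch.version.cuda,
      "| CUDA available:", torch.cuda.is_available())
print("spconv and pcdet imported successfully")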

Disclaimer

The code has been successfully tested on:

  • Ubuntu 18.04.6 LTS + CUDA 11.3 + conda 4.13.0
  • Debian GNU/Linux 10 (buster) + conda 4.13.0
  • macOS Big Sur 11.6.6 + conda 4.13.0

Contributions

Please feel free to suggest improvements to this repository.
We are always happy to merge useful pull requests.

Acknowledgments

This work is supported by Toyota via the TRACE project.

The work also received funding through the AI-SEE project, with national funding from the participating countries.

We also thank the Federal Ministry for Economic Affairs and Energy for support within
VVM-Verification and Validation Methods for Automated Vehicles Level 4 and 5, a PEGASUS family project.

Felix Heide was supported by an NSF CAREER Award (2047359),
a Sony Young Faculty Award, and a Project X Innovation Award.

We thank Emmanouil Sakaridis for verifying our derivation of occlusion angles in our snowfall simulation.

                             
