# 2DPASS

This repository is for 2DPASS, introduced in the following paper:

Xu Yan*, Jiantao Gao*, Chaoda Zheng*, Chao Zheng, Ruimao Zhang, Shuguang Cui, Zhen Li*, "2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds", ECCV 2022 [arxiv].
If you find our work useful in your research, please consider citing:
```
@inproceedings{yan20222dpass,
  title={2dpass: 2d priors assisted semantic segmentation on lidar point clouds},
  author={Yan, Xu and Gao, Jiantao and Zheng, Chaoda and Zheng, Chao and Zhang, Ruimao and Cui, Shuguang and Li, Zhen},
  booktitle={European Conference on Computer Vision},
  pages={677--695},
  year={2022},
  organization={Springer}
}

@InProceedings{yan2022let,
  title={Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis},
  author={Xu Yan and Heshen Zhan and Chaoda Zheng and Jiantao Gao and Ruimao Zhang and Shuguang Cui and Zhen Li},
  year={2022},
  booktitle={NeurIPS}
}

@article{yan2023benchmarking,
  title={Benchmarking the Robustness of LiDAR Semantic Segmentation Models},
  author={Yan, Xu and Zheng, Chaoda and Li, Zhen and Cui, Shuguang and Dai, Dengxin},
  journal={arXiv preprint arXiv:2301.00970},
  year={2023}
}
```
## News
- 2023-04-01 We merged MinkowskiNet and the official SPVCNN models from SPVNAS into our codebase. You can find these models in `config/`. We renamed our baseline model from `spvcnn.py` to `baseline.py`.
- 2023-03-31 We provide code for the robustness evaluation on SemanticKITTI-C.
- 2023-03-27 We release a model with higher performance on SemanticKITTI and code for naive instance augmentation.
- 2023-02-25 We release a new robustness benchmark for LiDAR semantic segmentation at SemanticKITTI-C. Welcome to test your models!
- 2022-10-11 Our new work on cross-modal knowledge distillation is accepted at NeurIPS 2022: paper / code.
- 2022-09-20 We release code for SemanticKITTI single-scan and NuScenes!
- 2022-07-03 2DPASS is accepted at ECCV 2022!
- 2022-03-08 We achieve 1st place in both single- and multi-scan SemanticKITTI and 3rd place on NuScenes-lidarseg!
## Installation

### Requirements
- pytorch >= 1.8
- yaml
- easydict
- pyquaternion
- lightning (tested with pytorch_lightning==1.3.8 and torchmetrics==0.5)
- torch-scatter (`pip install torch-scatter -f https://data.pyg.org/whl/torch-1.9.0+${CUDA}.html`)
- nuScenes-devkit (optional, for nuScenes)
- spconv (tested with spconv==2.1.16 and cuda==11.1, `pip install spconv-cu111==2.1.16`)
- torchsparse (optional, for MinkowskiNet and SPVCNN; `sudo apt-get install libsparsehash-dev`, then `pip install --upgrade git+https://github.com/mit-han-lab/[email protected]`)
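To quickly check which of the requirements are already importable, a small stdlib-only sketch like the following can help (this helper is not part of the codebase; the import names listed are our assumption of how the packages above are imported):

```python
import importlib.util


def missing_packages(names):
    """Return the subset of `names` that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]


# Assumed import names for the requirements listed above.
required = ["torch", "yaml", "easydict", "pyquaternion",
            "pytorch_lightning", "torch_scatter", "spconv"]

if __name__ == "__main__":
    absent = missing_packages(required)
    if absent:
        print("Missing packages:", ", ".join(absent))
    else:
        print("All required packages found.")
```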
## Data Preparation

### SemanticKITTI

Please download the files from the SemanticKITTI website and, additionally, the color data from the KITTI odometry website. Extract everything into the same folder.
```
./dataset/
├── ...
└── SemanticKitti/
    └── sequences/
        ├── 00/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   ├── labels/
        │   │   ├── 000000.label
        │   │   ├── 000001.label
        │   │   └── ...
        │   ├── image_2/
        │   │   ├── 000000.png
        │   │   ├── 000001.png
        │   │   └── ...
        │   └── calib.txt
        ├── 08/ # for validation
        ├── 11/ # 11-21 for testing
        ├── ...
        └── 21/
```
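A quick sanity check of this layout can be sketched as follows (a hypothetical helper, not part of the codebase; it only verifies the sub-folders each sequence needs):

```python
from pathlib import Path


def check_semantickitti(root, sequences=("00",)):
    """Return paths that are missing under root/sequences/<seq>/.

    Each sequence is expected to contain velodyne/, labels/, image_2/
    and a calib.txt, as in the tree above.
    """
    missing = []
    for seq in sequences:
        seq_dir = Path(root) / "sequences" / seq
        for sub in ("velodyne", "labels", "image_2"):
            if not (seq_dir / sub).is_dir():
                missing.append(str(seq_dir / sub))
        if not (seq_dir / "calib.txt").is_file():
            missing.append(str(seq_dir / "calib.txt"))
    return missing
```

For example, `check_semantickitti("./dataset/SemanticKitti", sequences=("00", "08"))` returns an empty list when those sequences are complete.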
### NuScenes

Please download the Full dataset (v1.0) with lidarseg from the NuScenes website and extract it.

```
./dataset/
├── ...
└── nuscenes/
    ├── v1.0-trainval
    ├── v1.0-test
    ├── samples
    ├── sweeps
    ├── maps
    └── lidarseg
```
## Training

### SemanticKITTI

You can run the training with:

```
cd <root dir of this repo>
python main.py --log_dir 2DPASS_semkitti --config config/2DPASS-semantickitti.yaml --gpu 0
```

The output will be written to `logs/SemanticKITTI/2DPASS_semkitti` by default.
### NuScenes

```
cd <root dir of this repo>
python main.py --log_dir 2DPASS_nusc --config config/2DPASS-nuscenese.yaml --gpu 0 1 2 3
```
### Vanilla Training without 2DPASS

We take SemanticKITTI as an example.

```
cd <root dir of this repo>
python main.py --log_dir baseline_semkitti --config config/2DPASS-semantickitti.yaml --gpu 0 --baseline_only
```
## Testing

You can run the testing with:

```
cd <root dir of this repo>
python main.py --config config/2DPASS-semantickitti.yaml --gpu 0 --test --num_vote 12 --checkpoint <dir for the pytorch checkpoint>
```

Here, `num_vote` is the number of views for test-time augmentation (TTA). We set this value to 12 by default (on a Tesla V100 GPU); if your GPU has less memory, you can choose a smaller value. `num_vote=1` denotes that no TTA is used and causes a performance drop of about 2%.
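The voting idea behind TTA can be sketched as follows (an illustrative toy, not the repository's implementation: per-class scores from several augmented views of the same point are averaged before taking the argmax):

```python
def vote(per_view_scores):
    """Average per-class scores over views and return the argmax class.

    per_view_scores: list of `num_vote` lists, each holding one score
    per class for the same point.
    """
    num_views = len(per_view_scores)
    num_classes = len(per_view_scores[0])
    avg = [sum(view[c] for view in per_view_scores) / num_views
           for c in range(num_classes)]
    return max(range(num_classes), key=lambda c: avg[c])


# Three augmented views of one point, with scores for 3 classes:
views = [[0.2, 0.5, 0.3],
         [0.1, 0.6, 0.3],
         [0.4, 0.3, 0.3]]
print(vote(views))  # class 1 wins after averaging
```

With a single view (`num_vote=1`) the result is just that view's argmax, which is why skipping TTA costs some accuracy.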
## Robustness Evaluation

Please download all subsets of SemanticKITTI-C from this link and extract them.

```
./dataset/
├── ...
└── SemanticKitti/
    ├── sequences/
    └── SemanticKITTI-C/
        ├── clean_data/
        ├── dense_16beam/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        └── ...
```

You can run the robustness evaluation with:

```
cd <root dir of this repo>
python robust_test.py --config config/2DPASS-semantickitti.yaml --gpu 0 --num_vote 12 --checkpoint <dir for the pytorch checkpoint>
```
## Model Zoo

You can download the models with the scores below from this Google Drive folder.
### SemanticKITTI

| Model (validation) | mIoU (vanilla) | mIoU (TTA) | Parameters |
|---|---|---|---|
| MinkowskiNet | 65.1% | 67.1% | 21.7M |
| SPVCNN | 65.9% | 67.8% | 21.8M |
| 2DPASS (4scale-64dimension) | 68.7% | 70.0% | 1.9M |
| 2DPASS (6scale-256dimension) | 70.7% | 72.0% | 45.6M |
Here, we fine-tuned the 2DPASS models on SemanticKITTI for more epochs, which yields the higher mIoU. If you train for 64 epochs, you should obtain about 66%/69% mIoU (vanilla) and 69%/71% (TTA) for the two models.
### NuScenes

| Model (validation) | mIoU (vanilla) | mIoU (TTA) | Parameters |
|---|---|---|---|
| MinkowskiNet | 74.3% | 76.0% | 21.7M |
| SPVCNN | 74.9% | 76.9% | 21.8M |
| 2DPASS (6scale-128dimension) | 76.7% | 79.6% | 11.5M |
| 2DPASS (6scale-256dimension) | 78.0% | 80.5% | 45.6M |
Note that the results on the benchmarks are obtained by additionally training with the validation set and using instance-level augmentation.
## Acknowledgements

The code is built on SPVNAS, Cylinder3D, xMUDA and SPCONV.

## License

This repository is released under the MIT License (see the LICENSE file for details).