2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds (ECCV 2022) 🔥


This repository is for 2DPASS, introduced in the following paper:

Xu Yan*, Jiantao Gao*, Chaoda Zheng*, Chao Zheng, Ruimao Zhang, Shuguang Cui, Zhen Li*, "2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds", ECCV 2022 [arxiv].

If you find our work useful in your research, please consider citing:

@inproceedings{yan20222dpass,
  title={2dpass: 2d priors assisted semantic segmentation on lidar point clouds},
  author={Yan, Xu and Gao, Jiantao and Zheng, Chaoda and Zheng, Chao and Zhang, Ruimao and Cui, Shuguang and Li, Zhen},
  booktitle={European Conference on Computer Vision},
  pages={677--695},
  year={2022},
  organization={Springer}
}

@inproceedings{yan2022let,
  title={Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis},
  author={Yan, Xu and Zhan, Heshen and Zheng, Chaoda and Gao, Jiantao and Zhang, Ruimao and Cui, Shuguang and Li, Zhen},
  booktitle={NeurIPS},
  year={2022}
}

@article{yan2023benchmarking,
  title={Benchmarking the Robustness of LiDAR Semantic Segmentation Models},
  author={Yan, Xu and Zheng, Chaoda and Li, Zhen and Cui, Shuguang and Dai, Dengxin},
  journal={arXiv preprint arXiv:2301.00970},
  year={2023}
}

News

  • 2023-04-01 We merge MinkowskiNet and the official SPVCNN models from SPVNAS into our codebase. You can find these models in config/. We rename our baseline model from spvcnn.py to baseline.py.
  • 2023-03-31 We provide codes for the robustness evaluation on SemanticKITTI-C.
  • 2023-03-27 We release a model with higher performance on SemanticKITTI and codes for naive instance augmentation.
  • 2023-02-25 We release a new robustness benchmark for LiDAR semantic segmentation at SemanticKITTI-C. Welcome to test your models!

  • 2022-10-11 Our new work for cross-modal knowledge distillation is accepted at NeurIPS 2022 😃 paper / code.
  • 2022-09-20 We release codes for SemanticKITTI single-scan and NuScenes 🚀!
  • 2022-07-03 2DPASS is accepted at ECCV 2022 🔥!
  • 2022-03-08 We achieve 1st place in both single and multi-scans of SemanticKITTI and 3rd place on NuScenes-lidarseg 🔥!

Installation

Requirements

Data Preparation

SemanticKITTI

Please download the files from the SemanticKITTI website and additionally the color data from the KITTI Odometry website. Extract everything into the same folder.

./dataset/
├── 
├── ...
└── SemanticKitti/
    └── sequences/
        ├── 00/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   ├── labels/
        │   │   ├── 000000.label
        │   │   ├── 000001.label
        │   │   └── ...
        │   ├── image_2/
        │   │   ├── 000000.png
        │   │   ├── 000001.png
        │   │   └── ...
        │   └── calib.txt
        ├── 08/ # for validation
        ├── 11/ # 11-21 for testing
        └── 21/
            └── ...
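
For reference, a minimal sketch of how a single scan and its labels can be read from this layout. The 16-bit semantic/instance split follows the SemanticKITTI label format; the snippet itself is only illustrative and is not part of this repository's data loaders.

import numpy as np

seq_dir = "./dataset/SemanticKitti/sequences/00"

# Each .bin file stores an (N, 4) float32 array: x, y, z, reflectance.
points = np.fromfile(f"{seq_dir}/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)

# Each .label file stores one uint32 per point:
# lower 16 bits = semantic label, upper 16 bits = instance id.
raw_labels = np.fromfile(f"{seq_dir}/labels/000000.label", dtype=np.uint32)
semantic_labels = raw_labels & 0xFFFF
instance_ids = raw_labels >> 16

print(points.shape, semantic_labels.shape, np.unique(semantic_labels))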

NuScenes

Please download the Full dataset (v1.0) with lidarseg from the NuScenes website and extract it.

./dataset/
├── 
├── ...
└── nuscenes/
    ├── v1.0-trainval
    ├── v1.0-test
    ├── samples
    ├── sweeps
    ├── maps
    └── lidarseg
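
To verify the extraction, a short sanity check with the official nuscenes-devkit can be helpful. This is a minimal sketch; the devkit is an assumed extra dependency here, and the version string must match the split you downloaded.

from nuscenes.nuscenes import NuScenes

# Point dataroot at the folder above; version must match an extracted split.
nusc = NuScenes(version="v1.0-trainval", dataroot="./dataset/nuscenes", verbose=True)

# Fetch the LiDAR sweep and its lidarseg annotation for the first sample.
sample = nusc.sample[0]
lidar_token = sample["data"]["LIDAR_TOP"]
lidar_path = nusc.get_sample_data_path(lidar_token)
lidarseg_file = nusc.get("lidarseg", lidar_token)["filename"]
print(lidar_path, lidarseg_file)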

Training

SemanticKITTI

You can run the training with

cd <root dir of this repo>
python main.py --log_dir 2DPASS_semkitti --config config/2DPASS-semantickitti.yaml --gpu 0

The output will be written to logs/SemanticKITTI/2DPASS_semkitti by default.

NuScenes

cd <root dir of this repo>
python main.py --log_dir 2DPASS_nusc --config config/2DPASS-nuscenese.yaml --gpu 0 1 2 3

Vanilla Training without 2DPASS

We take SemanticKITTI as an example.

cd <root dir of this repo>
python main.py --log_dir baseline_semkitti --config config/2DPASS-semantickitti.yaml --gpu 0 --baseline_only

Testing

You can run the testing with

cd <root dir of this repo>
python main.py --config config/2DPASS-semantickitti.yaml --gpu 0 --test --num_vote 12 --checkpoint <dir for the pytorch checkpoint>

Here, num_vote is the number of views used for test-time augmentation (TTA). We set this value to 12 by default (on a Tesla V100 GPU); if you use a GPU with less memory, you can choose a smaller value. num_vote=1 means no TTA is used, which causes a performance drop of about 2%. A conceptual sketch of this kind of multi-view voting follows below.
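
For intuition, here is a minimal sketch of what multi-view TTA voting typically looks like: the scan is rotated around the z-axis num_vote times and the per-point logits are averaged. This is a conceptual illustration under assumed names (model, rotate_points_z), not the repository's exact implementation.

import numpy as np
import torch

def rotate_points_z(points, angle):
    """Rotate the x/y coordinates of an (N, 4) point cloud around the z-axis."""
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotated = points.copy()
    rotated[:, 0] = cos_a * points[:, 0] - sin_a * points[:, 1]
    rotated[:, 1] = sin_a * points[:, 0] + cos_a * points[:, 1]
    return rotated

def predict_with_tta(model, points, num_vote=12):
    """Average per-point class logits over num_vote rotated copies of the scan."""
    logits_sum = None
    for i in range(num_vote):
        angle = 2 * np.pi * i / num_vote  # evenly spaced yaw rotations
        aug = rotate_points_z(points, angle)
        with torch.no_grad():
            logits = model(torch.from_numpy(aug).float().unsqueeze(0))  # (1, N, C)
        logits_sum = logits if logits_sum is None else logits_sum + logits
    return (logits_sum / num_vote).argmax(dim=-1)  # final per-point labels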

Robustness Evaluation

Please download all subsets of SemanticKITTI-C from this link and extract them.

./dataset/
├── 
├── ...
└── SemanticKitti/
    ├── sequences/
    └── SemanticKITTI-C/
        ├── clean_data/
        ├── dense_16beam/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        └── ...

You can run the robustness evaluation with

cd <root dir of this repo>
python robust_test.py --config config/2DPASS-semantickitti.yaml --gpu 0  --num_vote 12 --checkpoint <dir for the pytorch checkpoint>

Model Zoo

You can download the models with the scores below from this Google Drive folder.

SemanticKITTI

Model (validation)             mIoU (vanilla)   mIoU (TTA)   Parameters
MinkowskiNet                   65.1%            67.1%        21.7M
SPVCNN                         65.9%            67.8%        21.8M
2DPASS (4scale-64dimension)    68.7%            70.0%        1.9M
2DPASS (6scale-256dimension)   70.7%            72.0%        45.6M

Here, we fine-tune the 2DPASS models on SemanticKITTI with more epochs, which yields the higher mIoU. If you train for 64 epochs, you should obtain about 66%/69% for vanilla inference and 69%/71% after TTA.

NuScenes

Model (validation)             mIoU (vanilla)   mIoU (TTA)   Parameters
MinkowskiNet                   74.3%            76.0%        21.7M
SPVCNN                         74.9%            76.9%        21.8M
2DPASS (6scale-128dimension)   76.7%            79.6%        11.5M
2DPASS (6scale-256dimension)   78.0%            80.5%        45.6M

Note that the results on the benchmarks are obtained by additionally training on the validation set and using instance-level augmentation.

Acknowledgements

Our code is built on SPVNAS, Cylinder3D, xMUDA, and SPCONV.

License

This repository is released under the MIT License (see the LICENSE file for details).