VIBUS: Data-efficient 3D Scene Parsing with VIewpoint Bottleneck and Uncertainty-Spectrum Modeling
Beiwen Tian, Liyi Luo, Hao Zhao, Guyue Zhou
This repository contains the implementation and checkpoints of VIBUS: Data-efficient 3D Scene Parsing with VIewpoint Bottleneck and Uncertainty-Spectrum Modeling.
Our work has been accepted by the ISPRS Journal of Photogrammetry and Remote Sensing. Our paper is publicly available here.
Prepare Conda environment
The CUDA Toolkit version should NOT be higher than 11.1.
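If you are unsure which toolkit is installed, you can check it first (this assumes the toolkit lives under /usr/local/cuda-11.1, the same path used in the commands below):
# Check the installed CUDA toolkit version (should report release 11.1 or lower)
/usr/local/cuda-11.1/bin/nvcc --version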
# Create conda environment
conda create -n vibus python=3.8
conda activate vibus
# Install MinkowskiEngine
export CUDA_HOME=/usr/local/cuda-11.1
conda install openblas-devel -c anaconda
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 \
-f https://download.pytorch.org/whl/torch_stable.html
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps \
--install-option="--blas_include_dirs=${CONDA_PREFIX}/include" \
--install-option="--blas=openblas"
# Install pointnet2 package
cd pointnet2
python setup.py install
# Install bfs package
conda install -c bioconda google-sparsehash
cd instance_segmentation/lib/bfs/ops
python setup.py build_ext --include-dirs=${CONDA_PREFIX}/include
python setup.py install
# Install other requirements
pip install \
easydict==1.9 \
imageio==2.9.0 \
plyfile==0.7.4 \
tensorboardx==2.2 \
open3d==0.13.0 \
protobuf==3.20.0
pip install potpourri3d pymeshlab
cd SUField/
pip install -e .
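As an optional sanity check (not part of the original setup scripts), you can confirm that the CUDA build of PyTorch and MinkowskiEngine import correctly:
# Optional sanity check: verify PyTorch sees the GPU and MinkowskiEngine was built
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import MinkowskiEngine as ME; print(ME.__version__)"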
Testing
Semantic Segmentation on ScanNet
You may specify the paths to datasets and checkpoints in semantic_segmentation/scannet_ss_test.sh
cd semantic_segmentation/
./scannet_ss_test.sh
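The exact variable names inside scannet_ss_test.sh may differ; the snippet below only illustrates the kind of paths to fill in, and the same pattern applies to the other *_ss_test.sh and *_is_test.sh scripts:
# Illustrative only -- the actual variable names are defined in scannet_ss_test.sh
DATASET_PATH=/path/to/scannet_processed       # preprocessed ScanNet scans
CHECKPOINT_PATH=/path/to/checkpoint.pth       # checkpoint from the Model Zoo below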
Semantic Segmentation on S3DIS
You may specify the paths to datasets and checkpoints in semantic_segmentation/s3dis_ss_test.sh
cd semantic_segmentation/
./s3dis_ss_test.sh
Semantic Segmentation on Semantic3D
You may specify the paths to datasets and checkpoints in semantic_segmentation/semantic3d_ss_test.sh
cd semantic_segmentation/
./semantic3d_ss_test.sh
Instance Segmentation on ScanNet
You may specify the paths to datasets and checkpoints in instance_segmentation/scannet_is_test.sh
cd instance_segmentation/
./scannet_is_test.sh
Instance Segmentation on S3DIS
You may specify the paths to datasets and checkpoints in instance_segmentation/s3dis_is_test.sh
cd instance_segmentation/
./s3dis_is_test.sh
Visualization
1. Collect the inference results. Please change SAVE_PATH in scannet_ss_test_collect_pred.sh, then run:
cd semantic_segmentation/
./scannet_ss_test_collect_pred.sh
2. Run a script that recolors the point cloud according to the predictions, passing the SAVE_PATH from step 1 as --dataset_root:
cd semantic_segmentation/
python visualize.py --dataset_root /save/path/in/step/1
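To inspect a recolored point cloud by hand, a minimal open3d sketch (open3d is installed above; the .ply file name is hypothetical and depends on what visualize.py writes) could look like:
# Minimal sketch: open a recolored point cloud with open3d (file name is hypothetical)
python -c "import open3d as o3d; pcd = o3d.io.read_point_cloud('scene0000_00_pred.ply'); o3d.visualization.draw_geometries([pcd])"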
Viewpoint-Bottleneck Pretraining (self-supervised)
cd pretrain/
./run.sh
Supervised Training / Fine-tuning
Semantic Segmentation on ScanNet
You may specify the paths to the datasets in semantic_segmentation/scannet_ss_train.sh
cd semantic_segmentation/
./scannet_ss_train.sh
Semantic Segmentation on S3DIS
You may specify the paths to the datasets in semantic_segmentation/s3dis_ss_train.sh
cd semantic_segmentation/
./s3dis_ss_train.sh
Semantic Segmentation on Semantic3D
You may specify the paths to the datasets in semantic_segmentation/semantic3d_ss_train.sh
cd semantic_segmentation/
./semantic3d_ss_train.sh
Instance Segmentation on ScanNet
You may specify the paths to the datasets in instance_segmentation/scannet_is_train.sh
cd instance_segmentation/
./scannet_is_train.sh
Instance Segmentation on S3DIS
You may specify the paths to the datasets in instance_segmentation/s3dis_is_train.sh
cd instance_segmentation/
./s3dis_is_train.sh
Perform Spectral / Uncertainty Filtering (on ScanNet)
Spectral
1. Collect the inference results. Please change SAVE_PATH in scannet_ss_test_collect_pred.sh, then run:
cd semantic_segmentation/
./scannet_ss_test_collect_pred.sh
2. Perform spectrum filtering. Please pass the SAVE_PATH from step 1 as the --dataset_root argument:
cd semantic_segmentation/
python fit.py --action spectrum --dataset_root /path/to/last/save/root --save_root /path/to/save/filtered/dataset
3. Use the filtered dataset with pseudo labels to fine-tune the model. Please change DATASET_PATH in scannet_ss_train.sh to the save path of the filtered dataset from step 2, then run:
cd semantic_segmentation/
./scannet_ss_train.sh
Uncertainty
1. Collect the inference results. Please change SAVE_PATH in scannet_ss_test_collect_pred_unc.sh, then run:
cd semantic_segmentation/
./scannet_ss_test_collect_pred_unc.sh
2. Perform uncertainty filtering. Please pass the SAVE_PATH from step 1 as the --stat_root argument:
cd semantic_segmentation/
python fit.py --action uncertainty --dataset_root /path/to/original/dataset --stat_root /path/to/last/save/root --save_root /path/to/save/filtered/dataset
3. Use the filtered dataset with pseudo labels to fine-tune the model. Please change DATASET_PATH in scannet_ss_train.sh to the save path of the filtered dataset from step 2, then run:
cd semantic_segmentation/
./scannet_ss_train.sh
Model Zoo
Viewpoint Bottleneck (VIB) Self-Supervised Pretraining
Dataset | Checkpoint |
---|---|
ScanNet | Google Drive |
S3DIS | Google Drive |
Semantic3D | Google Drive |
Final Checkpoints
Dataset | Supervision | Amount | Semantic Segmentation | Instance Segmentation |
---|---|---|---|---|
ScanNet | Limited Annotations | 20 pts. | Google Drive | Google Drive |
ScanNet | Limited Annotations | 50 pts. | Google Drive | Google Drive |
ScanNet | Limited Annotations | 100 pts. | Google Drive | Google Drive |
ScanNet | Limited Annotations | 200 pts. | Google Drive | Google Drive |
ScanNet | Limited Reconstructions | 1% | Google Drive | Google Drive |
ScanNet | Limited Reconstructions | 5% | Google Drive | Google Drive |
ScanNet | Limited Reconstructions | 10% | Google Drive | Google Drive |
ScanNet | Limited Reconstructions | 20% | Google Drive | Google Drive |
ScanNet | Full | - | Google Drive | Google Drive |
S3DIS | Limited Annotations | 20 pts. | Google Drive | Google Drive |
S3DIS | Limited Annotations | 50 pts. | Google Drive | Google Drive |
S3DIS | Limited Annotations | 100 pts. | Google Drive | Google Drive |
S3DIS | Limited Annotations | 200 pts. | Google Drive | Google Drive |
S3DIS | Full | - | Google Drive | Google Drive |
Semantic3D | Limited Annotations | 20 pts. | Google Drive | N/A |
Semantic3D | Limited Annotations | 50 pts. | Google Drive | N/A |
Semantic3D | Limited Annotations | 100 pts. | Google Drive | N/A |
Semantic3D | Limited Annotations | 200 pts. | Google Drive | N/A |
Semantic3D | Full | - | Google Drive | N/A |
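As a usage sketch (not part of the original instructions), a checkpoint can be fetched from Google Drive with a tool such as gdown and plugged into the corresponding test script; the file id, output name, and checkpoint variable are placeholders:
# Sketch: download a checkpoint and evaluate with it (ids/names are placeholders)
pip install gdown
gdown <google-drive-file-id> -O checkpoints/vibus_scannet_ss.pth
# point the checkpoint path inside semantic_segmentation/scannet_ss_test.sh to the file above, then:
cd semantic_segmentation/
./scannet_ss_test.sh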