  • Stars: 461
  • Rank: 91,414 (Top 2%)
  • Language: Python
  • License: Other
  • Created: over 5 years ago
  • Updated: 11 months ago


Repository Details

Semantic3D semantic segmentation with Open3D and PointNet++

Intro

Demo project for Semantic3D (semantic-8) segmentation with Open3D and PointNet++. The purpose of this project is to showcase the usage of Open3D in deep-learning pipelines and to provide a clean baseline implementation for semantic segmentation on the Semantic3D dataset. Here's our entry on the semantic-8 test benchmark page.

Open3D is an open-source library that supports rapid development of software that deals with 3D data. The Open3D frontend exposes a set of carefully selected data structures and algorithms in both C++ and Python. The backend is highly optimized and is set up for parallelization. We welcome contributions from the open-source community.

In this project, Open3D was used for

  • Point cloud data loading, writing, and visualization. Open3D provides efficient implementations of various point cloud manipulation methods.
  • Data pre-processing, in particular, voxel-based down-sampling.
  • Point cloud interpolation, in particular, fast nearest neighbor search for label interpolation.
  • And more (a minimal usage sketch follows below).
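
For orientation, here is a minimal sketch of the Open3D calls behind these tasks. It is not taken verbatim from this repository and assumes a recent Open3D release and an illustrative file path.

import open3d as o3d

# Point cloud loading (writing works analogously via o3d.io.write_point_cloud).
pcd = o3d.io.read_point_cloud("dataset/semantic_raw/example.pcd")  # illustrative path

# Voxel-based down-sampling.
down = pcd.voxel_down_sample(voxel_size=0.05)

# Fast nearest-neighbor search, e.g. for label interpolation.
tree = o3d.geometry.KDTreeFlann(down)
k, idx, dist2 = tree.search_knn_vector_3d(pcd.points[0], 3)

# Visualization.
o3d.visualization.draw_geometries([down])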

This project is forked from Mathieu Orhan and Guillaume Dekeyser's repo, which is itself forked from the original PointNet2. We thank the original authors for sharing their methods.

Usage

1. Download

Download the Semantic3D dataset and extract it by running the following commands:

cd dataset/semantic_raw
bash download_semantic3d.sh

Open3D-PointNet2-Semantic3D/dataset/semantic_raw
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.txt
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.txt
├── ...

2. Convert .txt to .pcd files

Run

python preprocess.py

Open3D is able to read .pcd files much more efficiently than the raw .txt files.

Open3D-PointNet2-Semantic3D/dataset/semantic_raw
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.pcd (new)
├── bildstein_station1_xyz_intensity_rgb.txt
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.pcd (new)
├── bildstein_station3_xyz_intensity_rgb.txt
├── ...
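
A hedged sketch of what such a conversion can look like with NumPy and Open3D (not the repository's exact preprocess.py; it assumes the Semantic3D column layout x y z intensity r g b):

import numpy as np
import open3d as o3d

def txt_to_pcd(txt_path, pcd_path):
    # Parse the raw text once (slow), then persist as .pcd (fast to reload).
    data = np.loadtxt(txt_path)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(data[:, 0:3])
    pcd.colors = o3d.utility.Vector3dVector(data[:, 4:7] / 255.0)  # RGB scaled to [0, 1]
    o3d.io.write_point_cloud(pcd_path, pcd)

txt_to_pcd("dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.txt",
           "dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.pcd")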

3. Downsample

Run

python downsample.py

The downsampled dataset will be written to dataset/semantic_downsampled. Points with label 0 (unlabeled) are excluded during downsampling.

Open3D-PointNet2-Semantic3D/dataset/semantic_downsampled
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.pcd
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.pcd
├── ...
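
As an illustration of the idea (not the exact downsample.py), one can drop unlabeled points, voxel-down-sample, and carry labels over from the nearest remaining labeled point; this sketch assumes a recent Open3D API and a NumPy integer array of labels:

import numpy as np
import open3d as o3d

def downsample_with_labels(pcd, labels, voxel_size=0.05):
    # Exclude points with label 0 (unlabeled).
    keep = np.where(labels != 0)[0]
    filtered = pcd.select_by_index(keep)
    kept_labels = labels[keep]

    # Voxel-based down-sampling of the labeled points.
    down = filtered.voxel_down_sample(voxel_size)

    # Assign each down-sampled point the label of its nearest labeled neighbor.
    tree = o3d.geometry.KDTreeFlann(filtered)
    down_labels = np.empty(len(down.points), dtype=np.int32)
    for i, p in enumerate(down.points):
        _, idx, _ = tree.search_knn_vector_3d(p, 1)
        down_labels[i] = kept_labels[idx[0]]
    return down, down_labels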

4. Compile TF Ops

We need to build the TF kernels in tf_ops. First, activate the virtualenv and make sure TF can be found by the current Python. The following line should run without error.

python -c "import tensorflow as tf"

Then build TF ops. You'll need CUDA and CMake 3.8+.

cd tf_ops
mkdir build
cd build
cmake ..
make

After compilation, the following .so files should be in the build directory.

Open3D-PointNet2-Semantic3D/tf_ops/build
├── libtf_grouping.so
├── libtf_interpolate.so
├── libtf_sampling.so
├── ...

Verify that the TF kernels are working by running

cd .. # Now we're at Open3D-PointNet2-Semantic3D/tf_ops
python test_tf_ops.py
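
If you want to poke at the compiled kernels directly, they can be loaded with TensorFlow's standard custom-op mechanism; a small sketch (the repository's Python wrappers in tf_ops presumably do the equivalent internally):

import tensorflow as tf

# Load the custom PointNet++ kernels built above.
sampling_module = tf.load_op_library("tf_ops/build/libtf_sampling.so")
grouping_module = tf.load_op_library("tf_ops/build/libtf_grouping.so")
interpolate_module = tf.load_op_library("tf_ops/build/libtf_interpolate.so")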

5. Train

Run

python train.py

By default, the training set is used for training and the validation set for validation. To train with both the training and validation sets, use the --train_set=train_full flag. Checkpoints will be written to log/semantic.

6. Predict

Pick a checkpoint and run the predict.py script. The prediction dataset is configured by --set. Since PointNet2 only takes a few thousand points per forward pass, we need to sample from the prediction dataset multiple times to get good coverage of the points. Each sample contains the few thousand points required by PointNet2. To specify the number of such samples per scene, use the --num_samples flag.

python predict.py --ckpt log/semantic/best_model_epoch_040.ckpt \
                  --set=validation \
                  --num_samples=500

The prediction results will be written to result/sparse.

Open3D-PointNet2-Semantic3D/result/sparse
├── sg27_station4_intensity_rgb.labels
├── sg27_station4_intensity_rgb.pcd
├── sg27_station5_intensity_rgb.labels
├── sg27_station5_intensity_rgb.pcd
├── ...
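
The repeated-sampling idea described above can be sketched roughly as follows (a hypothetical helper, not the repository's predict.py; the batch size of 8192 points is illustrative):

import numpy as np

def sample_batches(points, num_samples=500, num_points=8192, seed=0):
    # points: (N, 3) NumPy array. Draw num_samples random subsets of
    # num_points points each; together they should cover the scene well
    # enough for the dense interpolation step that follows.
    rng = np.random.default_rng(seed)
    n = len(points)
    for _ in range(num_samples):
        idx = rng.choice(n, size=num_points, replace=num_points > n)
        yield idx, points[idx]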

7. Interpolate

The last step is to interpolate the sparse predictions onto the full point cloud. We use Open3D's hybrid K-NN search with a specified radius.

python interpolate.py

The prediction results will be written to result/dense.

Open3D-PointNet2-Semantic3D/result/dense
├── sg27_station4_intensity_rgb.labels
├── sg27_station5_intensity_rgb.labels
├── ...
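
The interpolation idea can be sketched as follows (an illustration, not the exact interpolate.py; the radius and neighbor count are placeholder values): for every full-resolution point, gather sparse predictions within a radius via Open3D's hybrid K-NN search and take a majority vote over their labels.

import numpy as np
import open3d as o3d

def interpolate_labels(sparse_pcd, sparse_labels, dense_pcd, radius=0.2, max_nn=20):
    # sparse_labels: NumPy integer array, one label per point of sparse_pcd.
    tree = o3d.geometry.KDTreeFlann(sparse_pcd)
    dense_labels = np.zeros(len(dense_pcd.points), dtype=np.int32)
    for i, p in enumerate(dense_pcd.points):
        # Hybrid search: up to max_nn neighbors within the given radius.
        k, idx, _ = tree.search_hybrid_vector_3d(p, radius, max_nn)
        if k > 0:
            dense_labels[i] = np.bincount(sparse_labels[np.asarray(idx)]).argmax()
    return dense_labels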

8. Submission

Finally, if you're submitting to the Semantic3D benchmark, we've included a handy tool to rename the submission files.

python renamer.py

Summary of directories

  • dataset/semantic_raw: Raw Semantic3D data (.txt and .labels files). Also contains the .pcd files generated by preprocess.py.
  • dataset/semantic_downsampled: Generated by downsample.py. Downsampled data (.pcd and .labels files).
  • result/sparse: Generated by predict.py. Sparse predictions (.pcd and .labels files).
  • result/dense: Generated by interpolate.py. Dense predictions (.labels files).
  • result/dense_label_colorized: Dense predictions with points colored by label type.

More Repositories

1. Open3D (C++, 10,396 stars): A Modern Library for 3D Data Processing
2. MiDaS (Python, 4,041 stars): Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
3. OpenBot (Swift, 2,679 stars): OpenBot leverages smartphones as brains for low-cost robots. We have designed a small electric vehicle that costs about $50 and serves as a robot body. Our software stack for Android smartphones supports advanced robotics workloads such as person following and real-time autonomous navigation.
4. DPT (Python, 1,794 stars): Dense Prediction Transformers
5. ZoeDepth (Jupyter Notebook, 1,750 stars): Metric depth estimation from a single image
6. Open3D-ML (Python, 1,644 stars): An extension of Open3D to address 3D Machine Learning tasks
7. PhotorealismEnhancement (Python, 1,237 stars): Code & Data for Enhancing Photorealism Enhancement
8. MultiObjectiveOptimization (Python, 753 stars): Source code for the Neural Information Processing Systems (NeurIPS) 2018 paper "Multi-Task Learning as Multi-Objective Optimization"
9. lang-seg (Jupyter Notebook, 654 stars): Language-Driven Semantic Segmentation
10. FastGlobalRegistration (C++, 489 stars): Fast Global Registration
11. FreeViewSynthesis (Python, 262 stars): Code repository for "Free View Synthesis", ECCV 2020
12. StableViewSynthesis (Python, 212 stars)
13. DeepLagrangianFluids (Python, 187 stars): Code repository for "Lagrangian Fluid Simulation with Continuous Convolutions", ICLR 2020
14. spear (C++, 173 stars): SPEAR: A Simulator for Photorealistic Embodied AI Research
15. DirectFuturePrediction (Python, 152 stars): Code for the paper "Learning to Act by Predicting the Future", Alexey Dosovitskiy and Vladlen Koltun, ICLR 2017
16. VI-Depth (Python, 139 stars): Code for Monocular Visual-Inertial Depth Estimation (ICRA 2023)
17. NPHard (Python, 139 stars): Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search
18. redwood-3dscan (Python, 100 stars)
19. Intseg (Python, 78 stars): Interactive Image Segmentation with Latent Diversity
20. TanksAndTemples (Python, 58 stars): Toolbox for the TanksAndTemples benchmark website
21. dcflow (C++, 52 stars): Code for the paper "Accurate Optical Flow via Direct Cost Volume Processing", Jia Xu, René Ranftl, and Vladlen Koltun, CVPR 2017
22. adaptive-surface-reconstruction (Python, 48 stars): Adaptive Surface Reconstruction for 3D Data Processing
23. DFE (Python, 43 stars)
24. open3d-cmake-find-package (C++, 42 stars): Find pre-installed Open3D package in CMake
25. vision-for-action (Python, 41 stars): Code to accompany "Does computer vision matter for action?"
26. LMRS (Python, 39 stars): Source code for the ICLR 2020 paper "Learning to Guide Random Search"
27. open3d_downloads (23 stars): Hosting Open3D test data for development use
28. Open3D-3rdparty (C, 20 stars)
29. open3d-cmake-external-project (CMake, 15 stars): Use Open3D as a CMake external project
30. 0shot-object-insertion (Python, 11 stars): Simulation and robot code for contact-rich household object insertion (ICRA 2023)
31. objects-with-lighting (8 stars)
32. Open3D-Viewer (C++, 7 stars)
33. generalized-smoothing (Python, 5 stars): Companion code for the ICML 2022 paper "Generalizing Gaussian Smoothing for Random Search"
34. Open3D-Python-CI (4 stars): Testing Open3D Python package from PyPI and Conda
35. MetaLearningTradeoffs (Python, 4 stars): Source code for the NeurIPS 2020 paper "Modeling and Optimization Trade-off in Meta-learning"
36. hello-world-docker-action (Dockerfile, 1 star)
37. mshadow (C++, 1 star): Forked from https://github.com/dmlc/mshadow