  • Stars: 762
  • Rank: 59,625 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: over 5 years ago
  • Updated: 7 months ago


Repository Details

SemanticKITTI API for visualizing dataset, processing data, and evaluating results.

API for SemanticKITTI

This repository contains helper scripts to open, visualize, process, and evaluate results for point clouds and labels from the SemanticKITTI dataset.


Example of a 3D point cloud from sequence 13:


Example of 2D spherical projection from sequence 13:


Example of voxelized point clouds for semantic scene completion:


Data organization

The data is organized in the following format:

/kitti/dataset/
          └── sequences/
                  ├── 00/
                  │   ├── poses.txt
                  │   ├── image_2/
                  │   ├── image_3/
                  │   ├── labels/
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   ├── voxels/
                  │   │     ├ 000000.bin
                  │   │     ├ 000000.label
                  │   │     ├ 000000.occluded
                  │   │     ├ 000000.invalid
                  │   │     ├ 000001.bin
                  │   │     ├ 000001.label
                  │   │     ├ 000001.occluded
                  │   │     ├ 000001.invalid
                  │   └── velodyne/
                  │         ├ 000000.bin
                  │         └ 000001.bin
                  ├── 01/
                  ├── 02/
                  .
                  .
                  .
                  └── 21/
  • From KITTI Odometry:
    • image_2 and image_3 correspond to the RGB images for each sequence.
    • velodyne contains the point clouds for each scan in each sequence. Each .bin scan is a list of float32 points in [x,y,z,remission] format. See laserscan.py to see how the points are read (a minimal reading sketch also follows this list).
  • From SemanticKITTI:
    • labels contains the labels for each scan in each sequence. Each .label file contains a uint32 label for each point in the corresponding .bin scan. See laserscan.py to see how the labels are read.
    • poses.txt contains the manually loop-closed poses for each capture (in the camera frame) that were used in the annotation tools to aggregate all the point clouds.
    • voxels contains all information needed for the task of semantic scene completion. Each .bin file stores, in a packed binary format, whether each voxel is occupied by laser measurements; this is the input to the semantic scene completion task and corresponds to the voxelization of a single LiDAR scan. Each .label file contains a label for each voxel of the completed scene, stored as a 16-bit unsigned integer (uint16_t) in binary format. The .invalid and .occluded files contain information about the occlusion of voxels: invalid voxels are occluded from every view position, and occluded voxels are occluded from the first viewpoint. See also SSCDataset.py for more information on loading the data.
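
As a minimal reading sketch (assuming NumPy and placeholder file paths; laserscan.py remains the authoritative reference), a scan and its labels can be loaded like this:

    import numpy as np

    # Hypothetical paths; any scan/label pair from a sequence works the same way.
    scan_file = "/kitti/dataset/sequences/00/velodyne/000000.bin"
    label_file = "/kitti/dataset/sequences/00/labels/000000.label"

    # Each scan is a flat list of float32 values in [x, y, z, remission] order.
    scan = np.fromfile(scan_file, dtype=np.float32).reshape(-1, 4)
    points = scan[:, :3]        # x, y, z coordinates
    remissions = scan[:, 3]     # per-point remission

    # Each label file stores one uint32 value per point of the corresponding scan.
    labels = np.fromfile(label_file, dtype=np.uint32)
    assert labels.shape[0] == points.shape[0]

    # Per the SemanticKITTI format, the lower 16 bits hold the semantic class and
    # the upper 16 bits the instance id; laserscan.py is the canonical reference.
    semantic = labels & 0xFFFF
    instance = labels >> 16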

The main configuration file for the data is in config/semantic-kitti.yaml. In this file you will find:

  • labels: dictionary which maps the numeric labels in the .label files to a string class name. Example: 10: "car"
  • color_map: dictionary which maps the numeric labels in the .label files to a BGR color for visualization. Example: 10: [245, 150, 100] # car, blue-ish
  • content: dictionary with the content of each class in labels, as a ratio of the total number of points in the dataset. This can be obtained by running the ./content.py script, and is used to calculate the weights for the cross entropy in all baseline methods (in order to handle class imbalance).
  • learning_map: dictionary which maps each class label to its cross entropy equivalent, for learning. This is done to mask undesired classes, map different classes together, and because the cross entropy expects a value in [0, numclasses - 1]. We also provide ./remap_semantic_labels.py, a script that uses this dictionary to put the label files in the cross entropy format, so that you can use the labels directly in your training pipeline (a small remapping sketch also follows this list). Examples:
      0 : 0     # "unlabeled"
      1 : 0     # "outlier" to "unlabeled" -> gets ignored in training, with unlabeled
      10: 1     # "car"
      252: 1    # "moving-car" to "car" -> gets merged with static car class
  • learning_map_inv: dictionary with the inverse of the previous mapping; it allows mapping the classes back to the original ones of interest (for saving point cloud predictions in the original label format). We also provide ./remap_semantic_labels.py, a script that uses this dictionary to put the label files back in the original format when invoked with the --inverse flag.
  • learning_ignore: dictionary that states, for each cross entropy class, whether it is ignored during training and evaluation. For example, the unlabeled class is ignored in both training and evaluation.
  • split: contains 3 lists, with the sequence numbers for training, validation, and evaluation.
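
As a rough illustration of how the learning_map dictionary can be applied (the provided ./remap_semantic_labels.py script is the supported way to do this; the lookup-table approach below is just a sketch):

    import numpy as np
    import yaml

    # Load the data configuration described above.
    config = yaml.safe_load(open("config/semantic-kitti.yaml"))
    learning_map = config["learning_map"]   # e.g. {0: 0, 1: 0, 10: 1, 252: 1, ...}

    # Build a dense lookup table so remapping is a single indexing operation.
    lut = np.zeros(2 ** 16, dtype=np.uint32)
    for original_label, training_label in learning_map.items():
        lut[original_label] = training_label

    # Remap the semantic part of a label file (lower 16 bits) to training ids.
    labels = np.fromfile("000000.label", dtype=np.uint32)
    training_labels = lut[labels & 0xFFFF]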

Dependencies for API:

System dependencies

$ sudo apt install python3-dev python3-pip python3-pyqt5.qtopengl # for visualization

Python dependencies

$ sudo pip3 install -r requirements.txt

Scripts:

ALL OF THE SCRIPTS CAN BE INVOKED WITH THE --help (-h) FLAG, FOR EXTRA INFORMATION AND OPTIONS.

Visualization

Point Clouds

To visualize the data, use the visualize.py script. It will open an interactive OpenGL visualization of the point clouds along with a spherical projection of each scan into a 64 x 1024 image (a rough sketch of such a projection follows the example commands below).

$ ./visualize.py --sequence 00 --dataset /path/to/kitti/dataset/

where:

  • sequence is the sequence to be accessed.
  • dataset is the path to the kitti dataset where the sequences directory is.

Navigation:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.

In order to visualize your predictions instead, the --predictions option replaces visualization of the labels with the visualization of your predictions:

$ ./visualize.py --sequence 00 --dataset /path/to/kitti/dataset/ --predictions /path/to/your/predictions
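
For reference, here is a rough, self-contained sketch of one way to compute such a spherical (range-image) projection. The sensor field-of-view values are assumptions typical for the HDL-64E; laserscan.py contains the projection actually used by the scripts.

    import numpy as np

    def spherical_projection(points, H=64, W=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
        # Project Nx3 points into an H x W range image (field of view is an assumption).
        fov_up = np.radians(fov_up_deg)
        fov_down = np.radians(fov_down_deg)
        fov = abs(fov_up) + abs(fov_down)

        depth = np.linalg.norm(points, axis=1)
        x, y, z = points[:, 0], points[:, 1], points[:, 2]

        yaw = -np.arctan2(y, x)                          # horizontal angle
        pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # vertical angle

        u = 0.5 * (yaw / np.pi + 1.0)                    # column in [0, 1]
        v = 1.0 - (pitch + abs(fov_down)) / fov          # row in [0, 1]

        cols = np.clip((u * W).astype(np.int32), 0, W - 1)
        rows = np.clip((v * H).astype(np.int32), 0, H - 1)

        image = np.full((H, W), -1, dtype=np.float32)
        image[rows, cols] = depth                        # later points overwrite earlier ones
        return image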

To directly compare two sets of data, use the compare.py script. It will open an interactive OpenGL visualization of the point cloud labels.

$ ./compare.py --sequence 00 --dataset_a /path/to/dataset_a/ --dataset_b /path/to/kitti/dataset_b/

where:

  • sequence is the sequence to be accessed.
  • dataset_a is the path to a dataset in KITTI format where the sequences directory is.
  • dataset_b is the path to another dataset in KITTI format where the sequences directory is.

Navigation:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.

Voxel Grids for Semantic Scene Completion

To visualize the data, use the visualize_voxels.py script. It will open an interactive OpenGL visualization of the voxel grids and options to visualize the provided voxelizations of the LiDAR data.

$ ./visualize_voxels.py --sequence 00 --dataset /path/to/kitti/dataset/

where:

  • sequence is the sequence to be accessed.
  • dataset is the path to the kitti dataset where the sequences directory is.

Navigation:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.

Note: Holding the forward/backward buttons triggers the playback mode.

LiDAR-based Moving Object Segmentation (LiDAR-MOS)

To visualize the data, use the visualize_mos.py script. It will open an interactive OpenGL visualization of the point clouds along with the moving object labels.

$ ./visualize_mos.py --sequence 00 --dataset /path/to/kitti/dataset/

where:

  • sequence is the sequence to be accessed.
  • dataset is the path to the kitti dataset where the sequences directory is.

Navigation:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.

Note: Holding the forward/backward buttons triggers the playback mode.

Evaluation

To evaluate the predictions of a method, use evaluate_semantics.py to evaluate semantic segmentation, evaluate_completion.py to evaluate semantic scene completion, and evaluate_panoptic.py to evaluate panoptic segmentation. Important: The labels and the predictions need to be in the original label format, which means that if a method learns the cross-entropy mapped classes, they need to be passed through the learning_map_inv dictionary before being saved in the original dataset format. This is to prevent changes in the dataset's classes of interest from affecting the intermediate outputs of approaches, since the original labels stay the same. For semantic segmentation, we provide the remap_semantic_labels.py script to make this shift before training, and once again before evaluation, selecting the classes of interest in the configuration file (a minimal sketch of this inverse remapping is given at the end of this section). The data needs to be either:

  • In a separate directory with this format:

    /method_predictions/
              └── sequences
                  ├── 00
                  │   └── predictions
                  │         ├ 000000.label
                  │         └ 000001.label
                  ├── 01
                  ├── 02
                  .
                  .
                  .
                  └── 21
    

    And run:

    $ ./evaluate_semantics.py --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate

    or

    $ ./evaluate_completion.py --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate

    or

    $ ./evaluate_panoptic.py --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate

    or for moving object segmentation

    $ ./evaluate_mos.py --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate
  • In the same directory as the dataset

    /kitti/dataset/
              ├── poses
              └── sequences
                  ├── 00
                  │   ├── image_2
                  │   ├── image_3
                  │   ├── labels
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   ├── predictions
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   └── velodyne
                  │         ├ 000000.bin
                  │         └ 000001.bin
                  ├── 01
                  ├── 02
                  .
                  .
                  .
                  └── 21
    

    And run (which sets the predictions directory as the same directory as the dataset):

    $ ./evaluate_semantics.py --dataset /path/to/kitti/dataset/ --split train/valid/test # depending on the desired split to evaluate

If instead the IoU vs. distance is wanted, the evaluation is performed in the same way, but with the evaluate_semantics_by_distance.py script. This will analyze the IoU for a set of 5 distance ranges: {(0m:10m), [10m:20m), [20m:30m), [30m:40m), (40m:50m)}.
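
As a sketch of the inverse remapping step mentioned above (mapping training ids back to original label ids and writing prediction files in the expected layout), something like the following can be used; the array contents and paths are placeholders, and remap_semantic_labels.py with --inverse is the supported tool:

    import numpy as np
    import yaml
    from pathlib import Path

    config = yaml.safe_load(open("config/semantic-kitti.yaml"))
    inv_map = config["learning_map_inv"]      # training id -> original label id

    lut = np.zeros(max(inv_map.keys()) + 1, dtype=np.uint32)
    for training_label, original_label in inv_map.items():
        lut[training_label] = original_label

    # Placeholder: one training id per point of scan 000000 of sequence 00.
    predictions = np.zeros(120000, dtype=np.uint32)
    original_format = lut[predictions]

    # Write the remapped labels where the evaluation scripts expect them.
    out_file = Path("method_predictions/sequences/00/predictions/000000.label")
    out_file.parent.mkdir(parents=True, exist_ok=True)
    original_format.astype(np.uint32).tofile(str(out_file))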

Validation

To ensure that your zip file is valid, we provide a small validation script validate_submission.py that checks for the correct folder structure and consistent number of labels for each scan.

The submission folder expects to get a zip file containing the following folder structure (as in the separate-directory case above):

├ description.txt (optional)
sequences
  ├── 11
  │   └── predictions
  │         ├ 000000.label
  │         ├ 000001.label
  │         ├ ...
  ├── 12
  │   └── predictions
  │         ├ 000000.label
  │         ├ 000001.label
  │         ├ ...
  ├── 13
  .
  .
  .
  └── 21

In summary, you only have to provide the label files containing your predictions for every point of each scan, and this is also checked by our validation script.

Run:

$ ./validate_submission.py --task {segmentation|completion|panoptic} /path/to/submission.zip /path/to/kitti/dataset

to check your submission.zip.
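
If it helps, the following sketch (not part of the repository; the paths are placeholders) packs the expected folder structure into a submission.zip that can then be checked with validate_submission.py:

    import zipfile
    from pathlib import Path

    root = Path("method_predictions")    # contains sequences/11..21/predictions/*.label
    out_zip = Path("submission.zip")

    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        description = root / "description.txt"
        if description.exists():         # optional approach information (see below)
            zf.write(description, "description.txt")
        for label_file in sorted(root.glob("sequences/*/predictions/*.label")):
            # Store paths relative to the archive root, as expected by the validator.
            zf.write(label_file, label_file.relative_to(root).as_posix())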

Note: We don't check if the labels are valid, since invalid labels are simply ignored by the evaluation script.

(New!) Adding Approach Information

If you want to have more information shown on the leaderboard of the new, updated CodaLab competitions under "Detailed Results", you have to add a description.txt file to the submission archive containing this information (here just an example):

name: Auto-MOS
pdf url: https://arxiv.org/pdf/2201.04501.pdf
code url: https://github.com/PRBonn/auto-mos

where name corresponds to the name of the method, pdf url is a link to the paper PDF (or empty), and code url is a URL that directs to the code (or empty). If the information is not available, we will use Anonymous for the name and n/a for the URLs.

Statistics

  • content.py evaluates the class content of the training set, which can be used to weight the loss during training and handle imbalanced data.
  • count.py returns the scan count for each sequence in the data.

Generation

  • generate_sequential.py generates a sequence of scans using the manually loop-closed poses used in our labeling tool, and stores them as individual point clouds. If, for example, we want to generate a dataset containing, for each point cloud, the aggregation of itself with the previous 4 scans, then:

    $ ./generate_sequential.py --dataset /path/to/kitti/dataset/ --sequence_length 5 --output /path/to/put/new/dataset 
  • remap_semantic_labels.py remaps the labels to and from the cross entropy format, so that the labels can be used for training and the predictions can be used for evaluation. This script uses the learning_map and learning_map_inv dictionaries from the config file to map the labels and predictions.

Docker for API

If you prefer not to install the requirements, a Docker container is provided to run the scripts.

To build and run the container in an interactive session, which allows running X11 apps (and GL) and copies this repo to the working directory, use

$ ./docker.sh /path/to/dataset

where /path/to/dataset is the location of your SemanticKITTI dataset; it will be available inside the container at ~/data (i.e., /home/developer/data) for further usage with the API. This is done by creating a shared volume, so it can be any directory containing data that is to be used by the API scripts.

Citation:

If you use this dataset and/or this API in your work, please cite its paper

@inproceedings{behley2019iccv,
    author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
     title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
 booktitle = {Proc. of the IEEE/CVF International Conf.~on Computer Vision (ICCV)},
      year = {2019}
}

And the paper for the original KITTI dataset:

@inproceedings{geiger2012cvpr,
    author = {A. Geiger and P. Lenz and R. Urtasun},
     title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
 booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
     pages = {3354--3361},
      year = {2012}}

More Repositories

  1. kiss-icp: A LiDAR odometry pipeline that just works (Python, 1,479 stars)
  2. depth_clustering: 🚕 Fast and robust clustering of point clouds generated with a Velodyne sensor (C++, 1,105 stars)
  3. lidar-bonnetal: Semantic and Instance Segmentation of LiDAR point clouds for autonomous driving (Python, 912 stars)
  4. semantic_suma: SuMa++: Efficient LiDAR-based Semantic SLAM (Chen et al., IROS 2019) (C++, 902 stars)
  5. OverlapNet: Loop Closing for 3D LiDAR-based SLAM (chen2020rss) (Python, 649 stars)
  6. LiDAR-MOS: (LMNet) Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data (RAL/IROS 2021) (Python, 574 stars)
  7. vdbfusion: C++/Python Sparse Volumetric TSDF Fusion (C++, 456 stars)
  8. SHINE_mapping: 🌟 SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations (ICRA 2023) (Python, 443 stars)
  9. puma: Poisson Surface Reconstruction for LiDAR Odometry and Mapping (Python, 400 stars)
  10. PIN_SLAM: 📍 PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency [TRO '24] (Python, 341 stars)
  11. bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics (Python, 323 stars)
  12. range-mcl: Range Image-based LiDAR Localization for Autonomous Vehicles Using Mesh Maps (chen2021icra) (Python, 278 stars)
  13. overlap_localization: Learning an Overlap-based Observation Model for 3D LiDAR Localization (chen2020iros) (Python, 270 stars)
  14. rangenet_lib: Inference module for RangeNet++ (milioto2019iros, chen2019iros) (C++, 238 stars)
  15. refusion: ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals (C++, 235 stars)
  16. bonnetal: Bonnet and then some! Deep Learning Framework for various Image Recognition Tasks. Photogrammetry and Robotics Lab, University of Bonn (Python, 226 stars)
  17. 4DMOS: Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions (RAL 2022) (Python, 201 stars)
  18. MapClosures: Effectively Detecting Loop Closures using Point Cloud Density Maps (Python, 196 stars)
  19. LiDiff: [CVPR'24] Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion (Python, 194 stars)
  20. visual-crop-row-navigation: A visual-servoing based robot navigation framework tailored for row-crop fields. It uses the images from two on-board cameras and exploits the regular crop-row structure for navigation, without explicit localization or mapping, following crop rows accurately and handling the switch to the next row within the same framework (C++, 178 stars)
  21. pole-localization: Online Range Image-based Pole Extractor for Long-term LiDAR Localization in Urban Environments (Python, 167 stars)
  22. online_place_recognition: Graph-based image sequence matching for visual place recognition in changing environments (C++, 150 stars)
  23. agribot: An agricultural robot (AgriBot) built from scratch to serve as a data-recording platform in fields; for further information about the design and purpose of the robot, see the About the AgriBot Project page (C++, 143 stars)
  24. LocNDF: Neural Distance Field Mapping for Robot Localization (Python, 136 stars)
  25. 4dNDF: 3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation (CVPR 2024) (Python, 131 stars)
  26. make_it_dense: Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments (Python, 127 stars)
  27. point-cloud-prediction: Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks (Python, 125 stars)
  28. ir-mcl: IR-MCL: Implicit Representation-Based Online Global Localization (https://arxiv.org/abs/2210.03113) (Python, 120 stars)
  29. MutiverseOdometry: Code for Simple But Effective Redundant Odometry for Autonomous Vehicles (C++, 111 stars)
  30. vpr_relocalization: A framework for visual place recognition in changing environments; matches two sequences of images with arbitrary trajectory overlap (C++, 107 stars)
  31. TARL: [CVPR'23] Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving (Python, 99 stars)
  32. lidar-visualizer: A LiDAR visualization tool for all your datasets (Python, 96 stars)
  33. deep-point-map-compression (Python, 95 stars)
  34. segcontrast (Python, 92 stars)
  35. auto-mos: Automatic Labeling to Generate Training Data for Online LiDAR-based Moving Object Segmentation (Python, 91 stars)
  36. 3DUIS (Python, 80 stars)
  37. lidar_transfer: Code for Langer et al., "Domain Transfer for Semantic Segmentation of LiDAR Data using Deep Neural Networks", IROS 2020 (Python, 70 stars)
  38. descriptor-dr: [ICRA 2023] Learning-Based Dimensionality Reduction for Computing Compact and Effective Local Feature Descriptors (Python, 61 stars)
  39. hsmcl (C++, 60 stars)
  40. SIMP (Python, 59 stars)
  41. ContMAV: [CVPR 2024] Open-world Semantic Segmentation Including Class Similarity (Python, 59 stars)
  42. extrinsic_calibration: Motion Based Multi-Sensor Extrinsic Calibration (Python, 57 stars)
  43. vdbfusion_ros: ROS1 Wrapper for VDBFusion (https://github.com/PRBonn/vdbfusion) (C++, 57 stars)
  44. DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments (Python, 55 stars)
  45. HortiMapping: 🫑 Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots (IROS '23) (Python, 53 stars)
  46. fast_change_detection: Fast Image-Based Geometric Change Detection Given a 3D Model (C++, 44 stars)
  47. contrastive_association: Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans (Python, 43 stars)
  48. retriever: Point Cloud-based Place Recognition in Compressed Map (Python, 40 stars)
  49. 4d_plant_registration (Python, 38 stars)
  50. tmcl: Text Guided MCL (C++, 34 stars)
  51. dynamic-point-removal: Static Map Generation from 3D LiDAR Point Clouds Exploiting Ground Segmentation (Python, 34 stars)
  52. MaskPLS: Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving (RA-L 2023) (Python, 32 stars)
  53. manifold_python: Python bindings for https://github.com/hjwdzh/Manifold (C++, 30 stars)
  54. PS-res-excite (Python, 26 stars)
  55. kppr: KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition (Python, 26 stars)
  56. goPro-meta: App to sample images from GoPro Hero 5 video and synchronize sensor frames to them; outputs a YAML file and extracted images (C++, 25 stars)
  57. geometrical_stem_detection: Code for fast and accurate geometrical plant stem detection (C++, 24 stars)
  58. PartiallyObservedInverseGames.jl: An inverse game solver for inferring objectives from noise-corrupted partial state observations of non-cooperative multi-agent interactions (Julia, 23 stars)
  59. pybonirob: Set of tools to access bonirob datasets in Python (Python, 23 stars)
  60. phenobench-baselines: Baselines of the PhenoBench Dataset (Python, 20 stars)
  61. voxblox_pybind: Python bindings for the Voxblox library (C++, 20 stars)
  62. catkin_tools_fetch: 🐕 "fetch" and "update" dependencies of projects in your catkin workspace with a new verb "dependencies" for catkin_tools (Python, 16 stars)
  63. nuscenes2kitti (Python, 16 stars)
  64. StyleGenForLabels: StyleGAN-based generation of labels for crop-weed segmentation (Python, 12 stars)
  65. plants_temporal_matcher: 3D point-to-point associations between plant point clouds acquired in different sessions, even in the presence of highly repetitive structures and drastic changes (Python, 12 stars)
  66. ipb_homework_checker: ✔️ A generic homework checker used to automatically check students' homework (Python, 11 stars)
  67. leaf_mesher: Precise 3D Reconstruction of Plants from UAV Imagery Combining Bundle Adjustment and Template Matching (9 stars)
  68. HAPT (Python, 9 stars)
  69. sigf: Image Matching for Crop Fields Using Similarity Invariant Geometric Feature (MATLAB, 8 stars)
  70. DG-CWS: Towards Domain Generalization in Crop and Weed Segmentation for Precision Farming Robots (Python, 7 stars)
  71. agri-pretraining (Python, 7 stars)
  72. leaf-plant-instance-segmentation: In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation (Python, 5 stars)
  73. MinkowskiPanoptic: Panoptic segmentation baseline implemented based on the MinkowskiEngine library (Python, 5 stars)
  74. Unsupervised-Pre-Training-for-3D-Leaf-Instance-Segmentation: Official repository of Unsupervised Pre-Training for 3D Leaf Instance Segmentation by Roggiolani et al. (Python, 5 stars)
  75. vdb_to_numpy: Tool to convert VDB grids to numpy arrays (Jupyter Notebook, 4 stars)
  76. g2o_catkin: :octocat: G2O meets catkin (CMake, 3 stars)
  77. ipb_workspace: An empty default workspace for development inside the IPB lab (3 stars)
  78. plant_pcd_segmenter: High Precision Leaf Instance Segmentation for Phenotyping in Point Clouds Obtained Under Real Field Conditions (2 stars)
  79. cinderella-geometric-animations: Animations of geometric properties relevant to Photogrammetry, Computer Vision and Robotics created with Cinderella (HTML, 1 star)