News:
We have renamed the branch 1.1 to main and switched the default branch from master to main. We encourage users to migrate to the latest version, though it comes with some cost. Please refer to the Migration Guide for more details.
v1.1.1 was released on 30/5/2023:
We have constructed a comprehensive LiDAR semantic segmentation benchmark on SemanticKITTI, including the Cylinder3D, MinkUNet and SPVCNN methods. Notably, the improved MinkUNetv2 achieves 70.3 mIoU on the SemanticKITTI validation set. We have also supported the training of BEVFusion and an occupancy prediction method, TPVFormer, in our projects. More new features for 3D perception are on the way. Please stay tuned!
Introduction
English | 简体中文
The main branch works with PyTorch 1.8+.
MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project developed by MMLab.
Major features
- Support multi-modality/single-modality detectors out of the box

  It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.
- Support indoor/outdoor 3D detection out of the box

  It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUN RGB-D, Waymo, nuScenes, Lyft, and KITTI. For the nuScenes dataset, we also support the nuImages dataset.
- Natural integration with 2D detection

  All the 300+ models and methods from 40+ papers supported in MMDetection can be trained or used in this codebase.
- High efficiency

  It trains faster than other codebases. The main results are shown below; details can be found in benchmark.md. We compare the number of samples trained per second (the higher, the better). Models that are not supported by other codebases are marked with ✗.

  | Methods | MMDetection3D | OpenPCDet | votenet | Det3D |
  | ------- | ------------- | --------- | ------- | ----- |
  | VoteNet | 358 | ✗ | 77 | ✗ |
  | PointPillars-car | 141 | ✗ | ✗ | 140 |
  | PointPillars-3class | 107 | 44 | ✗ | ✗ |
  | SECOND | 40 | 30 | ✗ | ✗ |
  | Part-A2 | 17 | 14 | ✗ | ✗ |
Like MMDetection and MMCV, MMDetection3D can also be used as a library to support different projects on top of it.
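For example, a minimal sketch of using MMDetection3D as a library for inference on a single point cloud; the config, checkpoint, and point-cloud paths below are placeholders, and the exact structure of the returned result depends on the installed version:

```python
# A minimal sketch, assuming mmdet3d >= 1.1 is installed and a config/checkpoint pair is available.
# All file paths below are placeholders.
from mmdet3d.apis import init_model, inference_detector

config_file = 'configs/pointpillars/pointpillars_kitti-3d-car.py'  # placeholder config path
checkpoint_file = 'checkpoints/pointpillars_kitti-3d-car.pth'      # placeholder checkpoint path

# Build the model from the config and load the trained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on a LiDAR point cloud stored as a .bin file.
result = inference_detector(model, 'demo/data/kitti/000008.bin')
print(result)  # predicted 3D boxes, labels and scores (exact structure depends on the version)
```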
License
This project is released under the Apache 2.0 license.
Changelog
v1.1.0 was released on 6/4/2023.
Please refer to changelog.md for details and release history.
Benchmark and model zoo
Results and models are available in the model zoo.
Supported tasks include 3D Object Detection, Monocular 3D Object Detection, Multi-modal 3D Object Detection, and 3D Semantic Segmentation; the supported backbones, heads, and other components are listed in the model zoo. The table below shows which backbones each method supports:

Method | ResNet | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D | MinkUNet |
---|---|---|---|---|---|---|---|---|---|
SECOND | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
PointPillars | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
FreeAnchor | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
VoteNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
H3DNet | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
3DSSD | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Part-A2 | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
MVXNet | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
CenterPoint | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
SSN | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
ImVoteNet | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
FCOS3D | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
PointNet++ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Group-Free-3D | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
ImVoxelNet | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
PAConv | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
DGCNN | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
SMOKE | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
PGD | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
MonoFlex | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
SA-SSD | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
FCAF3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
PV-RCNN | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
Cylinder3D | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
MinkUNet | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
SPVCNN | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
Note: All the 300+ models and methods from 40+ papers in 2D detection supported by MMDetection can be trained or used in this codebase.
Installation
Please refer to get_started.md for installation.
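After installation, a quick sanity check can be run as a minimal sketch (the printed version number is just an example):

```python
# Verify that mmdet3d is importable and print its version.
import mmdet3d
print(mmdet3d.__version__)  # e.g. '1.1.1'
```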
Get Started
Please see get_started.md for the basic usage of MMDetection3D. We provide guidance for a quick run with an existing dataset and with a new dataset for beginners. There are also tutorials on learning the configuration system, customizing datasets, designing data pipelines, customizing models, customizing runtime settings, and the Waymo dataset; see the config sketch below for a starting point.
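As a minimal sketch of working with the configuration system (the config path below is a placeholder and field names may differ across versions):

```python
# Load a config file and override a setting programmatically before training.
from mmengine.config import Config

cfg = Config.fromfile('configs/_base_/datasets/kitti-3d-3class.py')  # placeholder path
print(list(cfg.keys()))              # top-level config fields
cfg.train_dataloader.batch_size = 4  # assumed field; override a dataloader setting
```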
Please refer to FAQ for frequently asked questions. When updating the version of MMDetection3D, please also check the compatibility doc to be aware of the BC-breaking updates introduced in each version.
Citation
If you find this project useful in your research, please consider citing:
@misc{mmdet3d2020,
title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
author={MMDetection3D Contributors},
howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
year={2020}
}
Contributing
We appreciate all contributions to improve MMDetection3D. Please refer to CONTRIBUTING.md for the contributing guideline.
Acknowledgement
MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as users who give valuable feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new 3D detectors.
Projects in OpenMMLab
- MMEngine: OpenMMLab foundational library for training deep learning models.
- MMCV: OpenMMLab foundational library for computer vision.
- MMEval: A unified evaluation library for multiple machine learning libraries.
- MIM: MIM installs OpenMMLab packages.
- MMPreTrain: OpenMMLab pre-training toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
- MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- MMRazor: OpenMMLab model compression toolbox and benchmark.
- MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMFlow: OpenMMLab optical flow toolbox and benchmark.
- MMagic: OpenMMLab Advanced, Generative and Intelligent Creation toolbox.
- MMGeneration: OpenMMLab image and video generative models toolbox.
- MMDeploy: OpenMMLab model deployment framework.