https://github.com/qixuxiang/orb-slam2_with_semantic_labelling
There are too many large files in the .git folder, so the code has been moved to orb-slam2_with_semantic_label.
Authors: Xuxiang Qi ([email protected]), Shaowu Yang ([email protected]), Yuejin Yan ([email protected])
Current version: 1.0.0
## 0. Introduction
orb-slam2_with_semantic_label is a visual SLAM system based on ORB_SLAM2 [1-2]. ORB-SLAM2 is an excellent visual SLAM method that has been widely applied in robotic applications; however, it cannot provide semantic information for environment mapping. In this work, we present a method for building a 3D dense semantic map that uses both 2D image labels from YOLOv3 [3] and 3D geometric information.
## 1. Related Publications
Deep Learning Based Semantic Labelling of 3D Point Cloud in Visual SLAM
## 2. Prerequisites

### 2.1 Requirements
- Ubuntu 14.04 / 16.04 / 18.04
- ORB-SLAM2
- CUDA
- GCC >= 5.0
- CMake
- OpenCV
- PCL 1.7 or PCL 1.8 (may not work with PCL 1.9)
- libTorch 1.4

PS: the combination Ubuntu 18.04, CUDA 10.1, OpenCV 3.4, Eigen 3.2.10 and PCL 1.8 has been tested successfully.
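A quick way to sanity-check the toolchain before building is to print the installed versions. This is only a convenience sketch; the pkg-config module names (e.g. `opencv` vs `opencv4`, `pcl_common-1.8`) depend on how each library was installed on your system.

```bash
# Print the versions of the main build dependencies.
gcc --version | head -n1                 # expect GCC >= 5.0
cmake --version | head -n1
nvcc --version | tail -n1                # CUDA toolkit version
pkg-config --modversion opencv           # use "opencv4" if OpenCV 4 is installed
pkg-config --modversion eigen3
pkg-config --modversion pcl_common-1.8   # adjust the suffix for PCL 1.7
```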
### 2.2 Installation
Refer to the corresponding original repositories (ORB_SLAM2 and YOLO) for installation tutorials.
### 2.3 Build
git clone https://github.com/qixuxiang/orb-slam2_with_semantic_label.git
sh build.sh
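If `sh build.sh` stops partway, the individual steps can also be run by hand. The following is a minimal sketch assuming the standard ORB_SLAM2 layout (Thirdparty/DBoW2, Thirdparty/g2o, a compressed vocabulary); check this repository's own build.sh for the exact steps and any additional YOLO-related targets.

```bash
# Build the third-party libraries first (ORB_SLAM2-style layout assumed).
cd Thirdparty/DBoW2 && mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j"$(nproc)"
cd ../../g2o && mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j"$(nproc)"
cd ../../..

# Uncompress the ORB vocabulary.
cd Vocabulary && tar -xf ORBvoc.txt.tar.gz && cd ..

# Build the main project.
mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j"$(nproc)"
```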
## 3. Run the code
- Download `yolov3.weights`, `yolov3.cfg` and `coco.names` from darknet and put them in the `bin` folder (these files can also be found in YOLO V3). Then create a directory named `img` in the `bin` folder, i.e. run `sudo mkdir img` inside `bin`. You can also use libtorch-yolov3 to replace libYOLOv3SE; see the details at https://blog.csdn.net/TM431700/article/details/105889614. A combined command sketch for these steps is given after this list.
- Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it into the `data` folder.
- Associate RGB images and depth images using the Python script associate.py. We already provide associations for some of the sequences in `Examples/RGB-D/associations/`. You can generate your own associations file by executing:
  python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
- Change `TUMX.yaml` to TUM1.yaml, TUM2.yaml or TUM3.yaml for the freiburg1, freiburg2 and freiburg3 sequences respectively. Change `PATH_TO_SEQUENCE_FOLDER` to the uncompressed sequence folder. You can then run the project with:
  cd bin
  ./rgbd_tum ../Vocabulary/ORBvoc.txt ../Examples/RGB-D/TUM2.yaml ../data/rgbd-data ../data/rgbd-data/associations.txt
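The steps above can be chained together as follows. This is only a sketch: the download URLs and the example sequence (rgbd_dataset_freiburg2_desk) are assumptions, so take the exact links from the darknet and TUM pages referenced above; associate.py comes from the TUM RGB-D benchmark tools and its location may differ in your setup.

```bash
# 1. YOLOv3 model files into bin/ (URLs assumed; see the darknet project).
cd bin
wget https://pjreddie.com/media/files/yolov3.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
wget https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names
sudo mkdir img
cd ..

# 2. Example TUM RGB-D sequence into data/ (any sequence from the TUM site works).
mkdir -p data && cd data
wget https://vision.in.tum.de/rgbd/dataset/freiburg2/rgbd_dataset_freiburg2_desk.tgz
tar -xzf rgbd_dataset_freiburg2_desk.tgz
cd ..

# 3. Associate RGB and depth images (associate.py from the TUM benchmark tools).
python associate.py data/rgbd_dataset_freiburg2_desk/rgb.txt \
    data/rgbd_dataset_freiburg2_desk/depth.txt > data/rgbd_dataset_freiburg2_desk/associations.txt

# 4. Run the RGB-D example (TUM2.yaml matches the freiburg2 sequences).
cd bin
./rgbd_tum ../Vocabulary/ORBvoc.txt ../Examples/RGB-D/TUM2.yaml \
    ../data/rgbd_dataset_freiburg2_desk ../data/rgbd_dataset_freiburg2_desk/associations.txt
```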
## Update

- 2020-07-05: fixed a segmentation fault, made the system run faster and switched to libtorch. Thanks to vayneli!
## References
[1] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[2] Mur-Artal R, Tardos J D. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras[J]. arXiv preprint arXiv:1610.06475, 2016.
[3] Redmon J, Farhadi A. YOLOv3: An Incremental Improvement[J]. arXiv preprint arXiv:1804.02767, 2018.
## License
Our system is released under a GPLv3 license.
If you want to use the code for commercial purposes, please contact the authors.
## Other issues
- We have not tested the code with a ROS bridge/node. The system relies on extremely fast and tight coupling between mapping and tracking on the GPU, which we do not believe ROS supports natively in terms of message passing.
- You are welcome to submit an issue if you run into problems; please include the details of your software and computer setup, such as Ubuntu 14/16, OpenCV 2/3, CUDA 9.0, GCC 5.4, etc.
- We provide a video here.