[CVPR 2023] vMAP: Vectorised Object Mapping for Neural Field SLAM


Xin Kong · Shikun Liu · Marwan Taher · Andrew Davison

Paper | Video | Project Page


vMAP builds an object-level map from a real-time RGB-D input stream. Each object is represented by a separate MLP neural field model, all optimised in parallel via vectorised training.
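The vectorised-training idea can be sketched as follows: rather than looping over per-object MLPs, their weights are stacked along a leading object axis so that each layer of every object's network is evaluated in a single batched matrix multiply. A minimal numpy illustration of the concept (not the repo's code, which uses functorch's vectorised maps over PyTorch modules):

```python
import numpy as np

rng = np.random.default_rng(0)

num_objects, in_dim, hidden, out_dim, n_pts = 4, 3, 16, 1, 128

# One small MLP per object, with all weights stacked on a leading
# "object" axis so every layer is a single batched matmul.
W1 = rng.normal(size=(num_objects, in_dim, hidden))
b1 = np.zeros((num_objects, 1, hidden))
W2 = rng.normal(size=(num_objects, hidden, out_dim))
b2 = np.zeros((num_objects, 1, out_dim))

# A batch of sample points for each object: (num_objects, n_pts, in_dim).
x = rng.normal(size=(num_objects, n_pts, in_dim))

# Vectorised forward pass: all object MLPs evaluated in one shot.
h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
y = h @ W2 + b2                   # (num_objects, n_pts, out_dim)

print(y.shape)  # (4, 128, 1)
```

The batched result matches what a per-object loop would produce, but the whole ensemble is optimised together, which is what makes many small object models cheap to train.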


We provide the implementation of the following neural-field SLAM frameworks:

  • vMAP [Official Implementation]
  • iMAP [Simplified and Improved Re-Implementation, with depth-guided sampling]
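Depth-guided sampling concentrates samples along each ray near the observed depth instead of spreading them all uniformly between the near and far planes. A hedged numpy sketch of the idea (the exact sampling strategy and parameters in this repo may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def depth_guided_samples(depth, n_surface=8, n_uniform=4,
                         sigma=0.05, near=0.1, far=5.0):
    """Sample depths along a ray: a few coarse uniform samples over
    [near, far], plus samples concentrated around the measured depth."""
    uniform = rng.uniform(near, far, size=n_uniform)
    surface = rng.normal(depth, sigma, size=n_surface)
    return np.sort(np.concatenate([uniform, surface]))

z = depth_guided_samples(depth=2.0)
print(z.shape)  # (12,)
```

Biasing samples toward the measured surface spends most of the per-ray budget where the geometry actually is, which is why a depth sensor makes neural-field training much more sample-efficient.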

Install

First, create a conda environment with the required dependencies:

conda env create -f environment.yml

Dataset

Please download the following datasets to reproduce our results.

  • Replica Demo - Replica Room 0 only for faster experimentation.
  • Replica - All Pre-generated Replica sequences. For Replica data generation, please refer to directory data_generation.
  • ScanNet - Official ScanNet sequences. Each dataset contains a sequence of RGB-D images, as well as their corresponding camera poses, and object instance labels. To extract data from ScanNet .sens files, run
    conda activate py2
    python2 reader.py --filename ~/data/ScanNet/scannet/scans/scene0024_00/scene0024_00.sens --output_path ~/data/ScanNet/objnerf/ --export_depth_images --export_color_images --export_poses --export_intrinsics
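After extraction, the output directory contains per-frame color images, depth images, camera poses, and intrinsics. A hypothetical sketch of pairing them up by frame index (the directory and file naming here is an assumption; check the actual export format of reader.py):

```python
import os

def list_frames(root):
    """Pair color / depth / pose files by frame index.

    Assumed layout (illustrative only):
        root/color/<i>.jpg, root/depth/<i>.png, root/pose/<i>.txt
    """
    ids = sorted(
        int(os.path.splitext(f)[0])
        for f in os.listdir(os.path.join(root, "color"))
    )
    return [
        {
            "color": os.path.join(root, "color", f"{i}.jpg"),
            "depth": os.path.join(root, "depth", f"{i}.png"),
            "pose": os.path.join(root, "pose", f"{i}.txt"),
        }
        for i in ids
    ]
```

Sorting on the integer frame id (rather than the raw filename) keeps frame 10 after frame 2.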

Config

Then update the config files in configs/ with your dataset paths, as well as other training hyper-parameters.

"dataset": {
        "path": "path/to/ims/folder/",
    }
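As a quick sanity check, the config can be loaded with the standard json module and the dataset path inspected (keys other than "dataset"/"path" are illustrative):

```python
import json

def load_dataset_path(config_file):
    """Read the dataset path out of a vMAP-style JSON config file."""
    with open(config_file) as f:
        cfg = json.load(f)
    return cfg["dataset"]["path"]
```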

Running vMAP / iMAP

The following commands will run vMAP / iMAP in a single-thread setting.

vMAP

python ./train.py --config ./configs/Replica/config_replica_room0_vMAP.json --logdir ./logs/vMAP/room0 --save_ckpt True

iMAP

python ./train.py --config ./configs/Replica/config_replica_room0_iMAP.json --logdir ./logs/iMAP/room0 --save_ckpt True

Evaluation

To evaluate the quality of reconstructed scenes, we provide two evaluation methods:

3D Scene-level Evaluation

Following the evaluation protocol of the original iMAP, we compare reconstructions against the GT scene meshes using Accuracy, Completion, and Completion Ratio.

python ./metric/eval_3D_scene.py
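These three metrics can be sketched on point clouds with brute-force nearest-neighbour distances (a simplified illustration, not the repo's evaluation script, which samples points from meshes):

```python
import numpy as np

def chamfer_metrics(rec, gt, thresh=0.05):
    """Accuracy: mean distance from reconstructed points to GT.
    Completion: mean distance from GT points to the reconstruction.
    Completion Ratio: fraction of GT points within `thresh` of it."""
    # Pairwise distances between (N, 3) rec and (M, 3) gt points.
    d = np.linalg.norm(rec[:, None, :] - gt[None, :, :], axis=-1)
    acc = d.min(axis=1).mean()   # rec -> gt
    comp_d = d.min(axis=0)       # gt -> rec, per GT point
    comp = comp_d.mean()
    ratio = (comp_d < thresh).mean()
    return acc, comp, ratio
```

For real scenes the O(N·M) distance matrix is replaced by a KD-tree nearest-neighbour query, but the metric definitions are the same.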

3D Object-level Evaluation

We also provide object-level evaluation, computing the same metrics per object and averaging them across all objects in a scene.

python ./metric/eval_3D_obj.py
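The object-level variant can be sketched as the same per-pair metrics computed for each object and then averaged (the dict-of-point-clouds grouping here is illustrative):

```python
import numpy as np

def object_level_metrics(rec_by_obj, gt_by_obj, thresh=0.05):
    """Average per-object Accuracy / Completion / Completion Ratio.

    rec_by_obj, gt_by_obj: dicts mapping object id -> (N, 3) point array.
    """
    per_obj = []
    for obj_id, gt in gt_by_obj.items():
        rec = rec_by_obj[obj_id]
        d = np.linalg.norm(rec[:, None, :] - gt[None, :, :], axis=-1)
        acc = d.min(axis=1).mean()
        comp_d = d.min(axis=0)
        per_obj.append((acc, comp_d.mean(), (comp_d < thresh).mean()))
    # Average each metric across objects.
    return tuple(np.mean(per_obj, axis=0))
```

Averaging per object (rather than pooling all points) keeps small objects from being drowned out by large background surfaces.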

Results

We provide raw results, including 3D meshes, 2D novel view rendering, and evaluated metrics of vMAP and iMAP* for easier comparison.

Acknowledgement

We would like to thank the following open-source repositories that we have built upon in the implementation of this work: NICE-SLAM and functorch.

Citation

If you found this code/work useful in your own research, please consider citing the following:

@article{kong2023vmap,
  title={vMAP: Vectorised Object Mapping for Neural Field SLAM},
  author={Kong, Xin and Liu, Shikun and Taher, Marwan and Davison, Andrew J},
  journal={arXiv preprint arXiv:2302.01838},
  year={2023}
}
@inproceedings{sucar2021imap,
  title={iMAP: Implicit mapping and positioning in real-time},
  author={Sucar, Edgar and Liu, Shikun and Ortiz, Joseph and Davison, Andrew J},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={6229--6238},
  year={2021}
}