  • Stars: 259
  • Rank: 157,669 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: about 2 years ago
  • Updated: about 2 years ago

Repository Details

Loc-NeRF

Monte Carlo Localization using Neural Radiance Fields.

Coordinate Frames

For consistency throughout the code and the yaml files, we define coordinates using the camera frame commonly used for NeRF (x right, y up, z inward from the perspective of the camera) unless stated otherwise. Poses are FROM camera TO world unless otherwise stated. Note that this is not the same as the more common camera frame used in robotics (x right, y down, z outward).
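
As a concrete illustration (not code from this repo), the sketch below converts a camera-to-world pose from the NeRF convention above to the robotics/OpenCV convention by flipping the camera y and z axes:

import numpy as np

# Flip the camera y and z axes:
# NeRF (x right, y up, z inward) -> robotics (x right, y down, z outward).
FLIP_YZ = np.diag([1.0, -1.0, -1.0, 1.0])

def nerf_to_robotics_c2w(c2w_nerf):
    """Input and output are 4x4 homogeneous FROM-camera-TO-world poses."""
    # Right-multiplying only changes how the camera axes are expressed;
    # the translation (camera position in the world) is unchanged.
    return c2w_nerf @ FLIP_YZ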

Publications

If you find this code relevant for your work, please consider citing our paper:

Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields

This work was done in collaboration with MIT and Draper Labs and was partially funded by NASA Flight Opportunities under grant No. 80NSSC21K0348, by the ARL DCIST CRA under grant W911NF-17-2-0181, and by an Amazon Research Award.

1. Installation

A. Prerequisites

  • Install ROS by following the instructions on the ROS website.

  • If you want to use VIO for the predict step in a real robot demo, install a VIO pipeline such as VINS-Fusion or Kimera.

2. Loc-NeRF installation

# Setup catkin workspace
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/
catkin init

# Clone repo
cd ~/catkin_ws/src
git clone https://github.com/MIT-SPARK/Loc-NeRF

# Compile code
catkin build

# Source workspace
source ~/catkin_ws/devel/setup.bash

# Install dependencies
cd ~/catkin_ws/src/Loc-NeRF
pip install -r requirements.txt

Starting Loc-NeRF

We use ROS and rviz as the framework for running Loc-NeRF and for visualizing performance. As general good practice, remember to source your workspace in each terminal you use.

  1. Open a new terminal and run: roscore

  2. In another terminal, launch Loc-NeRF:

roslaunch locnerf navigate.launch parameter_file:=<param_file.yaml>

  3. In another terminal, launch rviz for visualization:

rviz -d $(rospack find locnerf)/rviz/rviz.rviz

  4. If you are not running with a rosbag, i.e. you are using LLFF data, then Loc-NeRF should start and you should be set. If you are using a rosbag, continue with the next steps.

  5. In another terminal, launch VIO.

  6. Finally, in another terminal, play your rosbag:

rosbag play /PATH/TO/ROSBAG

Provided config files

We provide three yaml files in /cfg to get you started.

jackal.yaml is set up to run a real-time demo. The predict step runs at the rate of the available prediction poses (we used VIO). The update step processes incoming images as fast as computation limits allow (in our case about 2.5 Hz) and discards all other images to prevent large latency.
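
The sketch below (an illustration only; the topic name and run_update_step placeholder are assumptions, not Loc-NeRF's actual implementation) shows one common ROS pattern for this: keep only the most recent image and let the slower update step consume it, so stale frames are dropped instead of queuing up and adding latency.

import threading
import rospy
from sensor_msgs.msg import Image

latest_image = None
lock = threading.Lock()

def image_callback(msg):
    # Keep only the newest frame; older unprocessed frames are discarded.
    global latest_image
    with lock:
        latest_image = msg

def run_update_step(image_msg):
    # Placeholder for the slow, NeRF-based particle-filter update.
    pass

def update_loop():
    global latest_image
    rate = rospy.Rate(10)  # poll faster than the update step can run
    while not rospy.is_shutdown():
        with lock:
            msg, latest_image = latest_image, None  # take the newest frame, if any
        if msg is not None:
            run_update_step(msg)
        rate.sleep()

if __name__ == "__main__":
    rospy.init_node("locnerf_latest_image_example")
    rospy.Subscriber("/camera/image_raw", Image, image_callback, queue_size=1)
    update_loop()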

llff.yaml runs Loc-NeRF on the LLFF dataset as described in our paper.

llff_global.yaml runs Loc-NeRF on the LLFF dataset with a wider spread of particles to test the ability to perform global localization as described in our paper.
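
For intuition, the sketch below (illustrative only; the function and parameter names are placeholders, not keys in our yaml files) shows how a narrow initial spread of particle poses corresponds to local pose tracking, while a much wider spread corresponds to the global-localization setting:

import numpy as np

def sample_initial_particles(center, num_particles, trans_spread, rot_spread, rng=None):
    """Sample particle poses (x, y, z, roll, pitch, yaw) uniformly around a nominal pose."""
    rng = np.random.default_rng() if rng is None else rng
    center = np.asarray(center, dtype=float)
    offsets = np.hstack([
        rng.uniform(-trans_spread, trans_spread, size=(num_particles, 3)),  # position
        rng.uniform(-rot_spread, rot_spread, size=(num_particles, 3)),      # orientation
    ])
    return center + offsets

# Narrow spread for local tracking vs. wide spread for global localization.
tracking_particles = sample_initial_particles(np.zeros(6), 300, trans_spread=0.1, rot_spread=np.deg2rad(10))
global_particles = sample_initial_particles(np.zeros(6), 300, trans_spread=1.0, rot_spread=np.pi)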

3. Usage

Currently we provide example code to run Loc-NeRF on two types of data: LLFF data, to compare against iNeRF, and custom data for a real-time demo. For both we use NeRF-Pytorch as our NeRF map.

The fastest way to start running Loc-NeRF is to download LLFF data with pre-trained NeRF weights. We also provide instructions for running a demo on a real robot, which requires training a NeRF with metric scaled poses.

Using LLFF data

Download LLFF images and pretrained NeRF-Pytorch weights from NeRF-Pytorch. If you download our fork of iNeRF, the configs and ckpts folders will already be set up correctly with the pre-trained weights, and you just need to add the data folder from NeRF-Pytorch.

Place data using the following structure:

├── configs
│   ├── ...
├── ckpts
│   ├── fern
│   │   └── fern.tar
│   ├── fortress
│   │   └── fortress.tar
│   ├── horns
│   │   └── horns.tar
│   ├── room
│   │   └── room.tar
│   └── ...
├── data
│   ├── nerf_llff_data
│   │   ├── fern      # downloaded llff dataset
│   │   ├── fortress  # downloaded llff dataset
│   │   ├── horns     # downloaded llff dataset
│   │   ├── room      # downloaded llff dataset
│   │   └── ...

After updating llff.yaml with the directory where you placed the data and any other params you want to change, you are ready to run Loc-NeRF! By default, Loc-NeRF will estimate the camera pose of 5 random images from each of fern, fortress, horns, and room (20 images in total). You can use rviz for real-time visualization of the ground truth pose and the particles.
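
To give a sense of how such a visualization can be produced, the snippet below (a minimal sketch, not the code used by Loc-NeRF; the node and topic names are assumptions) publishes a set of particle positions as a geometry_msgs/PoseArray that rviz can display:

import rospy
from geometry_msgs.msg import Pose, PoseArray

def publish_particles(pub, positions, frame_id="world"):
    """positions: iterable of (x, y, z); orientations left as identity for brevity."""
    msg = PoseArray()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = frame_id
    for x, y, z in positions:
        p = Pose()
        p.position.x, p.position.y, p.position.z = x, y, z
        p.orientation.w = 1.0
        msg.poses.append(p)
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("particle_viz_example")
    pub = rospy.Publisher("/particles", PoseArray, queue_size=1, latch=True)
    publish_particles(pub, [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)])
    rospy.spin()

In rviz, add a PoseArray display subscribed to the chosen topic to see the particles.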

Plotting results

If you log results from Loc-NeRF, we provide code to plot the position and rotation error inside tools/eval_logs.py. The script also contains code to automatically run iNeRF using the same initial conditions that Loc-NeRF used and plot the results of iNeRF.

If you don't want to run iNeRF, running the plotting code is as easy as changing a few parameters at the top of tools/eval_logs.py and then running python3 tools/eval_logs.py. Note that run_inerf should be set to False.

Automatically generating iNeRF - Loc-NeRF comparison

Install our fork of iNeRF, which is set up to interface with our automatic comparison script.

To automatically run iNeRF after you have logged data from Loc-NeRF (we need to know the initial conditions used for iNeRF), run the following:

python3 tools/eval_logs.py --config /home/dominic/inerf/configs/fern.txt --data_dir /home/dominic/inerf/data/nerf_llff_data/ --ckpt_dir /home/dominic/inerf/ckpts/ --N_importance 64 --batch_size 2048

Note that run_inerf should be set to True. This will both run iNeRF and log the results, so you don't need to rerun iNeRF. To make the interface to iNeRF easier, we pass a specific config directly on the command line (in this case fern.txt), but setting N_importance and batch_size directly, along with the provided scripts, ensures that the correct params are used for each LLFF dataset.

Using custom data for a real-time experiment

To run Loc-NeRF on your robot or simulator, you will first need to train a NeRF with metric scaled poses. This is important so that a movement estimated in the world by the predict step roughly corresponds to the same movement inside the NeRF. There are several options for training a NeRF; for ours we used Colmap geo-registration (https://colmap.github.io/faq.html#geo-registration). We first estimate an unscaled trajectory with Colmap and then provide Colmap with a small subset of scaled poses that it uses to scale the entire trajectory.
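
For illustration (following the Colmap FAQ linked above; the file name, image names, and coordinates below are placeholders, not values from our setup), the reference file used for Colmap geo-registration lists one image per line together with its metric position:

# Hypothetical example: write the reference file for Colmap geo-registration
# ("image_name X Y Z" per line, in metric units). All values are placeholders.
metric_positions = {
    "frame_0001.jpg": (0.00, 0.00, 0.00),
    "frame_0050.jpg": (1.25, 0.10, 0.02),
    "frame_0100.jpg": (2.40, 0.15, 0.05),
}

with open("ref_images.txt", "w") as f:
    for name, (x, y, z) in metric_positions.items():
        f.write(f"{name} {x} {y} {z}\n")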

Our procedure is as follows:

  1. Record a rosbag with RGB images (for NeRF) along with stereo images and IMU (for VIO).

  2. Use the provided script tools/ros_to_jpg.py to convert a downsampled set of images from the rosbag into images for Colmap (a minimal sketch of this kind of conversion is shown after this list).

  3. Use the imgs2poses.py script provided at LLFF to generate Colmap poses for the trajectory up to scale.

  4. You should now have a set of poses up to scale. To add scale, you will need to determine the metric scaled pose of a small subset of images (we used five images) and use the provided Colmap function at colmap-add-scale to generate optimized metric scaled poses for your dataset.

  5. Use our fork of NeRF-Pytorch at train-nerf to train a NeRF with metric scaled poses.

  6. Now you are all set to run Loc-NeRF with a rosbag for a real-time demo.
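
As referenced in step 2 above, a minimal sketch of this kind of rosbag-to-image conversion might look like the following (an illustration with an assumed topic name, paths, and downsampling rate, not the contents of tools/ros_to_jpg.py):

import os
import cv2
import rosbag
from cv_bridge import CvBridge

bag_path = "/PATH/TO/ROSBAG.bag"   # placeholder
image_topic = "/camera/image_raw"  # placeholder
out_dir = "colmap_images"
skip = 10  # keep every 10th frame to downsample

os.makedirs(out_dir, exist_ok=True)
bridge = CvBridge()
with rosbag.Bag(bag_path) as bag:
    for i, (_, msg, _) in enumerate(bag.read_messages(topics=[image_topic])):
        if i % skip != 0:
            continue
        img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        cv2.imwrite(os.path.join(out_dir, f"frame_{i:06d}.jpg"), img)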

4. FAQ

This code is under active development with more updates coming soon. Below are some known common issues/TODOs. We also welcome PRs and feedback on any encountered issues to keep improving this code for the community:

  1. We use ROS to provide real-time visualization with rviz and as a structure to run on real robot platforms. For some users, a reliance on ROS may be an unnecessary requirement, and future work is to separate Loc-NeRF from the ROS wrapper.

Third-party code:

Parts of this code are based on a PyTorch implementation of iNeRF and on NeRF-Pytorch.

NeRF-Pytorch:

@misc{lin2020nerfpytorch,
  title={NeRF-pytorch},
  author={Yen-Chen, Lin},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/yenchenlin/nerf-pytorch/}},
  year={2020}
}

More Repositories

1

Kimera

Index repo for Kimera code
1,800
star
2

TEASER-plusplus

A fast and robust point cloud registration library
C++
1,753
star
3

Kimera-VIO

Visual Inertial Odometry with SLAM capabilities and 3D Mesh generation.
C++
1,534
star
4

Kimera-Semantics

Real-Time 3D Semantic Reconstruction from 2D data
C++
640
star
5

Hydra

C++
583
star
6

Kimera-RPGO

Robust Pose Graph Optimization
C++
468
star
7

Kimera-VIO-ROS

ROS wrapper for Kimera-VIO
C++
371
star
8

Kimera-Multi

Index repo for Kimera-Multi system
Shell
327
star
9

Khronos

Spatio-Temporal Metric-Semantic SLAM
C++
273
star
10

Kimera-Multi-Data

A large-scale multi-robot dataset for multi-robot SLAM
149
star
11

GNC-and-ADAPT

Graduated Non-Convexity (GNC) and Adaptive Trimming (ADAPT) algorithms for outlier robust estimation
MATLAB
112
star
12

CertifiablyRobustPerception

Certifiable Outlier-Robust Geometric Perception
MATLAB
107
star
13

PD-MeshNet

Primal-Dual Mesh Convolutional Neural Networks
Python
106
star
14

GlobalOptimizationTutorial

Hands-on Tutorial for Global Optimization in Matlab
MATLAB
104
star
15

llm_scene_understanding

HTML
71
star
16

STRIDE

Solver for Large-Scale Rank-One Semidefinite Relaxations
MATLAB
66
star
17

Clio

66
star
18

VNAV-labs

Labs for MIT 16.485
Python
64
star
19

Spatial-Hash

Minimal C++ library for spatial data structures based on voxel hashing
C++
57
star
20

KISS-Matcher

52
star
21

Hydra-ROS

Hydra ROS Interface
C++
39
star
22

Spark-DSG

Scene Graph API (C++ and Python)
C++
37
star
23

Kimera-VIO-ROS2

C++
36
star
24

Kimera-Multi-LCD

C++
34
star
25

Kimera-VIO-Evaluation

Code to automatically evaluate and tune parameters for Kimera-VIO pipeline.
Python
33
star
26

Kimera-PGMO

C++
33
star
27

ROBIN

C++
31
star
28

config_utilities

Automatic C++ config structs and tools.
C++
28
star
29

pose_graph_tools

C++
26
star
30

pose-baselines

Jupyter Notebook
25
star
31

estimation-contracts

MATLAB
25
star
32

neural_tree

Python
22
star
33

C-3PO

Python
21
star
34

Kimera-Distributed

C++
21
star
35

MiDiffusion

Python
17
star
36

LP2

Long-term Human Trajectory Prediction using 3D DSGs
15
star
37

GlobalOptimization-ICCV2019

ICCV 2019 Tutorial: Global Optimization for Geometric Understanding with Provable Guarantees
TeX
14
star
38

FUSES

C++
11
star
39

Hydra-GNN

Room classification network training and inference code
Python
10
star
40

certifiable_tracking

MATLAB
8
star
41

PACE

Index repo for the PACE project
Python
7
star
42

robotRepresentations-RSS2023

Robot Representations Workshop @ RSS 2023
SCSS
6
star
43

ensemble_pose

Self-training for an ensemble of object pose estimators
Python
5
star
44

ford-paper-params

Parameters for competitor pipelines for the Kimera Multicamera project
5
star
45

CertifiablePerception-RSS2020

Workshop website on Certifiable Robot Perception for RSS2020
TeX
5
star
46

Longterm-Perception-WS

Workshop on Long-term Perception for Autonomy in Dynamic Human-shared Environments
SCSS
4
star
47

ThreedFront

Python
4
star
48

PerceptionMonitoring

Python
3
star
49

dbow2_catkin

CMake
2
star
50

PAL-ICRA2020

Ruby
2
star
51

kimera-multi-vocab

Vocabulary file for Kimera-Multi in the Kimera-Multi dataset
2
star
52

VNAV2020-handouts

Handouts/webpage for VNAV 2020
HTML
1
star