# ngp_pl
Advertisement: Check out the latest integrated project nerfstudio! There are a lot of recent improvements on NeRF-related methods, including instant-ngp!

Instant-ngp (only NeRF) in pytorch+cuda trained with pytorch-lightning (high quality with high speed). This repo aims at providing a concise pytorch interface to facilitate future research, and I am grateful if you can share it (a citation is highly appreciated)!
- Official CUDA implementation
- torch-ngp, another pytorch implementation that I referenced heavily.
## Gallery
(Demo video: `gui.mp4`)

Other representative videos can be found in GALLERY.md.
## Installation
This implementation has strict requirements due to its dependencies on other libraries. If you encounter an installation problem caused by a hardware/software mismatch, I'm afraid I have no plans to support other platforms (you are welcome to contribute).
### Hardware

- OS: Ubuntu 20.04
- NVIDIA GPU with compute capability >= 7.5 and more than 6 GB of memory (tested with an RTX 2080 Ti), CUDA 11.3 (might work with older versions); a quick way to check your GPU is sketched after this list
- 32 GB RAM (in order to load full-size images)
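If you are unsure whether your GPU qualifies, `nvidia-smi` can report the compute capability directly. Note that the `compute_cap` query field is only available on reasonably recent drivers, so treat this as a hedged convenience check rather than part of the official setup:

```bash
# Print GPU name, compute capability (needs >= 7.5) and total memory (> 6 GB)
nvidia-smi --query-gpu=name,compute_cap,memory.total --format=csv
```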
### Software

- Clone this repo by `git clone https://github.com/kwea123/ngp_pl`
- Python >= 3.8 (installation via anaconda is recommended; use `conda create -n ngp_pl python=3.8` to create a conda environment and activate it by `conda activate ngp_pl`)
- Python libraries
  - Install pytorch by `pip install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113`
  - Install `torch-scatter` following their instruction
  - Install `tinycudann` following their instruction (pytorch extension)
  - Install `apex` following their instruction
  - Install core requirements by `pip install -r requirements.txt`
- Cuda extension: upgrade `pip` to >= 22.1 and run `pip install models/csrc/` (please run this each time you `pull` the code)

The whole sequence is collected into one script sketch after this list.
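For convenience, here is the installation sequence above as a single sketch. The pytorch, requirements and CUDA-extension commands come straight from the list; the `torch-scatter`, `tinycudann` and `apex` lines are assumptions based on those projects' own READMEs at the time of writing, so double-check their instructions:

```bash
# Environment (assumes anaconda is installed; run these interactively)
git clone https://github.com/kwea123/ngp_pl && cd ngp_pl
conda create -n ngp_pl python=3.8 -y
conda activate ngp_pl

# PyTorch built against CUDA 11.3 (from the list above)
pip install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

# torch-scatter: the cu113 wheel index is an assumption, see their instructions
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.11.0+cu113.html

# tinycudann pytorch extension: install command from their README (may change)
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

# apex: build flags taken from their README (may change)
git clone https://github.com/NVIDIA/apex && cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" . && cd ..

# Core requirements and this repo's CUDA extension
pip install -r requirements.txt
pip install --upgrade "pip>=22.1"
pip install models/csrc/   # re-run this after every `git pull`
```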
## Supported Datasets

- NSVF data

  Download the preprocessed datasets (`Synthetic_NeRF`, `Synthetic_NSVF`, `BlendedMVS`, `TanksAndTemples`) from NSVF. Do not change the folder names, since there are some hard-coded fixes in my dataloader.

- NeRF++ data

  Download data from here.

- Colmap data

  For custom data, run `colmap` to get a folder `sparse/0` under which there are `cameras.bin`, `images.bin` and `points3D.bin` (a hedged colmap sketch is given after this list). The following data with colmap format are also supported:

  - nerf_llff_data
  - mipnerf360 data
  - HDR-NeRF data. Additionally, download my colmap pose estimation from here and extract it to the same location.

- RTMV data

  Download data from here. To convert the HDR images into LDR images for training, run `python misc/prepare_rtmv.py <path/to/RTMV>`; it will create an `images/` folder under each scene folder and use those images for training (and testing).
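For the custom-data (colmap) case, a minimal reconstruction sketch is below. `automatic_reconstructor` is a real colmap entry point, but the flags shown are an assumption about a typical capture, not this repo's prescribed pipeline; any colmap workflow that produces `sparse/0` with the three `.bin` files works:

```bash
# Assumes your photos are in <workspace>/images; writes <workspace>/sparse/0
# containing cameras.bin, images.bin and points3D.bin as required above.
colmap automatic_reconstructor \
    --workspace_path <workspace> \
    --image_path <workspace>/images \
    --dense 0   # sparse reconstruction is enough here
```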
## Training

Quickstart: `python train.py --root_dir <path/to/lego> --exp_name Lego`

This trains the Lego scene for 30k steps (each step with 8192 rays) and performs one round of testing at the end. The training process should finish within about 5 minutes (saving test images is slow; add `--no_save_test` to disable it). The test PSNR is shown at the end.
More options can be found in opt.py.
For training on other public datasets, please refer to the scripts under `benchmarking`; an illustrative command is sketched below.
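As an illustration, training a scene from the `Synthetic_NSVF` download might look like the following. `--root_dir`, `--exp_name` and `--no_save_test` appear above, while `--dataset_name nsvf` is an assumption based on the hyperparameters referenced in the GUI section; see opt.py and the `benchmarking` scripts for the authoritative flags:

```bash
# Hypothetical NSVF-scene run; flag values are illustrative, check opt.py
python train.py \
    --root_dir <path/to/Synthetic_NSVF/Wineholder> \
    --dataset_name nsvf \
    --exp_name Wineholder \
    --no_save_test
```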
## Testing

Use `test.ipynb` to generate images. A pretrained Lego model is available here.

GUI usage: run `python show_gui.py` followed by the same hyperparameters used in training (`dataset_name`, `root_dir`, etc.) and add the checkpoint path with `--ckpt_path <path/to/.ckpt>`, for example as sketched below.
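Concretely, reusing the quickstart's Lego settings, launching the GUI might look like this; the checkpoint path is a placeholder, and whether every training flag (e.g. `--exp_name`) is needed here is an assumption:

```bash
# Reuse the training hyperparameters from the quickstart, plus a checkpoint
python show_gui.py \
    --root_dir <path/to/lego> \
    --exp_name Lego \
    --ckpt_path <path/to/.ckpt>
```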
## Comparison with torch-ngp and the paper
I compared the quality (average test PSNR on `Synthetic-NeRF`) and the inference speed (on the `Lego` scene) against the concurrent work torch-ngp (default settings) and the paper, all trained for about 5 minutes:
| Method | avg PSNR | FPS | GPU |
| --- | --- | --- | --- |
| torch-ngp | 31.46 | 18.2 | 2080 Ti |
| mine | 32.96 | 36.2 | 2080 Ti |
| instant-ngp paper | 33.18 | 60 | 3090 |
As for quality, mine is slightly better than torch-ngp, but the results might fluctuate across different runs. As for speed, mine is faster than torch-ngp, but still only half as fast as instant-ngp. Speed depends on the scene (if most of the scene is empty, rendering is faster).
## Benchmarks

To run benchmarks, use the scripts under `benchmarking`.

The following are my results trained using one RTX 2080 Ti (qualitative results here):
### Synthetic-NeRF

| | Mic | Ficus | Chair | Hotdog | Materials | Drums | Ship | Lego | AVG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | 35.59 | 34.13 | 35.28 | 37.35 | 29.46 | 25.81 | 30.32 | 35.76 | 32.96 |
| SSIM | 0.988 | 0.982 | 0.984 | 0.980 | 0.944 | 0.933 | 0.890 | 0.979 | 0.960 |
| LPIPS | 0.017 | 0.024 | 0.025 | 0.038 | 0.070 | 0.076 | 0.133 | 0.022 | 0.051 |
| FPS | 40.81 | 34.02 | 49.80 | 25.06 | 20.08 | 37.77 | 15.77 | 36.20 | 32.44 |
| Training time | 3m9s | 3m12s | 4m17s | 5m53s | 4m55s | 4m7s | 9m20s | 5m5s | 5m00s |
### Synthetic-NSVF

| | Wineholder | Steamtrain | Toad | Robot | Bike | Palace | Spaceship | Lifestyle | AVG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | 31.64 | 36.47 | 35.57 | 37.10 | 37.87 | 37.41 | 35.58 | 34.76 | 35.80 |
| SSIM | 0.962 | 0.987 | 0.980 | 0.994 | 0.990 | 0.977 | 0.980 | 0.967 | 0.980 |
| LPIPS | 0.047 | 0.023 | 0.024 | 0.010 | 0.015 | 0.021 | 0.029 | 0.044 | 0.027 |
| FPS | 47.07 | 75.17 | 50.42 | 64.87 | 66.88 | 28.62 | 35.55 | 22.84 | 48.93 |
| Training time | 3m58s | 3m44s | 7m22s | 3m25s | 3m11s | 6m45s | 3m25s | 4m56s | 4m36s |
### Tanks and Temples

| | Ignatius | Truck | Barn | Caterpillar | Family | AVG |
| --- | --- | --- | --- | --- | --- |
| PSNR | 28.30 | 27.67 | 28.00 | 26.16 | 34.27 | 28.78 |
| FPS* | 10.04 | 7.99 | 16.14 | 10.91 | 6.16 | 10.25 |

\*Evaluated on `test-traj`.
### BlendedMVS

| | Jade* | Fountain* | Character | Statues | AVG |
| --- | --- | --- | --- | --- | --- |
| PSNR | 25.43 | 26.82 | 30.43 | 26.79 | 27.38 |
| FPS** | 26.02 | 21.24 | 35.99 | 19.22 | 25.61 |
| Training time | 6m31s | 7m15s | 4m50s | 5m57s | 6m48s |

\*I manually switched the background from black to white, so these numbers aren't directly comparable to those in the papers.

\*\*Evaluated on `test-traj`.
## TODO
- use super resolution in GUI to improve FPS
- multi-sphere images as background