DirectVoxGO
Direct Voxel Grid Optimization (CVPR 2022 Oral; see the project page, the DVGO paper, and the DVGO v2 paper).
(Teaser videos: github_teaser_inward_bounded.mp4 and github_teaser_forward_facing.mp4.)
Custom casual capturing
A short guide to capturing custom forward-facing scenes and rendering fly-through videos.
Below are two RGB and depth fly-through videos from custom-captured scenes.
(Video: casual_capturing.mp4)
Features
- Speed up NeRF by replacing the MLP with a voxel grid.
- Simple scene representation (see the conceptual sketch after this list):
  - Volume densities: dense voxel grid (3D).
  - View-dependent colors: dense feature grid (4D) + shallow MLP.
- PyTorch CUDA extension built just-in-time for another 2--3x speedup.
- O(N) realization of the distortion loss proposed by mip-NeRF 360.
  - The loss improves our training time and quality.
  - We have released it as a self-contained PyTorch package: torch_efficient_distloss.
  - For a batch of 8192 rays × 256 points:
    - GPU memory consumption: 6192MB => 96MB.
    - Run time for 100 iterations: 20 sec => 0.2 sec.
- Supported datasets:
  - Bounded inward-facing: NeRF, NSVF, BlendedMVS, T&T (masked), DeepVoxels.
  - Unbounded inward-facing: T&T, LF, mip-NeRF 360.
  - Forward-facing: LLFF.
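The representation is simple enough to sketch in a few lines of PyTorch. The toy module below is illustrative only, not the repo's actual model (the real implementation adds more machinery, such as the JIT-built CUDA extension); `ToyVoxelScene` and all of its parameter choices are hypothetical:

```python
# Conceptual sketch of a dense-grid scene representation: a 3D density grid
# plus a 4D feature grid decoded by a shallow MLP. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVoxelScene(nn.Module):
    def __init__(self, reso=128, feat_dim=12, width=128):
        super().__init__()
        self.density = nn.Parameter(torch.zeros(1, 1, reso, reso, reso))
        self.feature = nn.Parameter(torch.zeros(1, feat_dim, reso, reso, reso))
        self.rgbnet = nn.Sequential(              # shallow MLP for view-dependent color
            nn.Linear(feat_dim + 3, width), nn.ReLU(inplace=True),
            nn.Linear(width, 3),
        )

    def forward(self, xyz, viewdir):
        # xyz, viewdir: (M, 3); xyz normalized to [-1, 1]^3.
        g = xyz.view(1, -1, 1, 1, 3)              # grid_sample expects (N, D, H, W, 3)
        sigma = F.grid_sample(self.density, g, align_corners=True).view(-1)
        feat = F.grid_sample(self.feature, g, align_corners=True)
        feat = feat.view(self.feature.shape[1], -1).t()      # (M, feat_dim)
        rgb = torch.sigmoid(self.rgbnet(torch.cat([feat, viewdir], dim=-1)))
        return F.softplus(sigma), rgb             # volume density, view-dependent RGB

# Query 1024 random points with random view directions.
scene = ToyVoxelScene()
xyz = torch.rand(1024, 3) * 2 - 1
dirs = F.normalize(torch.randn(1024, 3), dim=-1)
sigma, rgb = scene(xyz, dirs)
```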
Installation
git clone [email protected]:sunset1995/DirectVoxGO.git
cd DirectVoxGO
pip install -r requirements.txt
PyTorch and torch_scatter installation is machine-dependent; please install the correct versions for your machine.
Dependencies
- `PyTorch`, `numpy`, `torch_scatter`: main computation.
- `scipy`, `lpips`: SSIM and LPIPS evaluation.
- `tqdm`: progress bar.
- `mmcv`: config system.
- `opencv-python`: image processing.
- `imageio`, `imageio-ffmpeg`: image and video I/O.
- `Ninja`: to build the newly implemented torch extension just-in-time.
- `einops`: torch tensor shaping with a pretty API.
- `torch_efficient_distloss`: O(N) realization of the distortion loss (usage sketch below).
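A minimal usage sketch of the last dependency, assuming the package exposes an `eff_distloss(w, m, interval)` entry point (check the torch_efficient_distloss README for the authoritative API):

```python
# Hedged sketch of the O(N) distortion loss on a batch of 8192 rays x 256
# points; eff_distloss and its argument order are assumed from the package
# README and should be verified there.
import torch
from torch_efficient_distloss import eff_distloss

B, N = 8192, 256                           # rays x sampled points per ray
logits = torch.randn(B, N, requires_grad=True)
w = torch.softmax(logits, dim=-1)          # per-ray volume-rendering weights
m = torch.linspace(0, 1, N).expand(B, N)   # normalized midpoint of each sample
interval = 1.0 / N                         # constant sampling interval
loss = eff_distloss(w, m, interval)        # O(N) realization, not naive O(N^2)
loss.backward()
```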
Directory structure for the datasets
data
├── nerf_synthetic     # Link: https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
│   └── [chair|drums|ficus|hotdog|lego|materials|mic|ship]
│       ├── [train|val|test]
│       │   └── r_*.png
│       └── transforms_[train|val|test].json
│
├── Synthetic_NSVF     # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip
│   └── [Bike|Lifestyle|Palace|Robot|Spaceship|Steamtrain|Toad|Wineholder]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0_train|1_val|2_test]_*.png
│       └── pose
│           └── [0_train|1_val|2_test]_*.txt
│
├── BlendedMVS         # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/BlendedMVS.zip
│   └── [Character|Fountain|Jade|Statues]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0|1|2]_*.png
│       └── pose
│           └── [0|1|2]_*.txt
│
├── TanksAndTemple     # Link: https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip
│   └── [Barn|Caterpillar|Family|Ignatius|Truck]
│       ├── intrinsics.txt
│       ├── rgb
│       │   └── [0|1|2]_*.png
│       └── pose
│           └── [0|1|2]_*.txt
│
├── deepvoxels         # Link: https://drive.google.com/drive/folders/1ScsRlnzy9Bd_n-xw83SP-0t548v63mPH
│   └── [train|validation|test]
│       └── [armchair|cube|greek|vase]
│           ├── intrinsics.txt
│           ├── rgb/*.png
│           └── pose/*.txt
│
├── nerf_llff_data     # Link: https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
│   └── [fern|flower|fortress|horns|leaves|orchids|room|trex]
│
├── tanks_and_temples  # Link: https://drive.google.com/file/d/11KRfN91W1AxAW6lOFs4EeYDbeoQZCi87/view?usp=sharing
│   └── [tat_intermediate_M60|tat_intermediate_Playground|tat_intermediate_Train|tat_training_Truck]
│       └── [train|test]
│           ├── intrinsics/*txt
│           ├── pose/*txt
│           └── rgb/*jpg
│
├── lf_data            # Link: https://drive.google.com/file/d/1gsjDjkbTh4GAR9fFqlIDZ__qR9NYTURQ/view?usp=sharing
│   └── [africa|basket|ship|statue|torch]
│       └── [train|test]
│           ├── intrinsics/*txt
│           ├── pose/*txt
│           └── rgb/*jpg
│
├── 360_v2             # Link: https://jonbarron.info/mipnerf360/
│   └── [bicycle|bonsai|counter|garden|kitchen|room|stump]
│       ├── poses_bounds.npy
│       └── [images_2|images_4]
│
├── nerf_llff_data     # Link: https://drive.google.com/drive/folders/14boI-o5hGO9srnWaaogTU5_ji7wkX2S7
│   └── [fern|flower|fortress|horns|leaves|orchids|room|trex]
│       ├── poses_bounds.npy
│       └── [images_2|images_4]
│
└── co3d               # Link: https://github.com/facebookresearch/co3d
    └── [donut|teddybear|umbrella|...]
        ├── frame_annotations.jgz
        ├── set_lists.json
        └── [129_14950_29917|189_20376_35616|...]
            ├── images
            │   └── frame*.jpg
            └── masks
                └── frame*.png
GO
- Training:
  $ python run.py --config configs/nerf/lego.py --render_test
  Use `--i_print` and `--i_weights` to change the log interval.
- Evaluation: to evaluate only the testset `PSNR`, `SSIM`, and `LPIPS` of the trained `lego` without re-training, run:
  $ python run.py --config configs/nerf/lego.py --render_only --render_test \
        --eval_ssim --eval_lpips_vgg
  Use `--eval_lpips_alex` to evaluate LPIPS with a pre-trained AlexNet instead of a VGG net.
- Render video:
  $ python run.py --config configs/nerf/lego.py --render_only --render_video
  Use `--render_video_factor 4` for a fast preview.
- Reproduction: all config files to reproduce our results:
$ ls configs/*

configs/blendedmvs:
Character.py Fountain.py Jade.py Statues.py

configs/nerf:
chair.py drums.py ficus.py hotdog.py lego.py materials.py mic.py ship.py

configs/nsvf:
Bike.py Lifestyle.py Palace.py Robot.py Spaceship.py Steamtrain.py Toad.py Wineholder.py

configs/tankstemple:
Barn.py Caterpillar.py Family.py Ignatius.py Truck.py

configs/deepvoxels:
armchair.py cube.py greek.py vase.py

configs/tankstemple_unbounded:
M60.py Playground.py Train.py Truck.py

configs/lf:
africa.py basket.py ship.py statue.py torch.py

configs/nerf_unbounded:
bicycle.py bonsai.py counter.py garden.py kitchen.py room.py stump.py

configs/llff:
fern.py flower.py fortress.py horns.py leaves.py orchids.py room.py trex.py
Custom casually captured scenes
Coming soon, hopefully.
Development and tuning guide
Extension to new datasets
Before implementing support for a new dataset, we recommend first adjusting the data-related config fields to fit your camera coordinate system. We provide two visualization tools for debugging.
- Inspect the cameras and the allocated BBox (a quick npz-inspection snippet follows this list):
  - Export via `--export_bbox_and_cams_only {filename}.npz`:
    python run.py --config configs/nerf/mic.py --export_bbox_and_cams_only cam_mic.npz
  - Visualize the result:
    python tools/vis_train.py cam_mic.npz
- Inspect the learned geometry after coarse optimization:
  - Export via `--export_coarse_only {filename}.npz` (assuming `coarse_last.tar` is available in the training log):
    python run.py --config configs/nerf/mic.py --export_coarse_only coarse_mic.npz
  - Visualize the result:
    python tools/vis_volume.py coarse_mic.npz 0.001 --cam cam_mic.npz
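If you just want to peek at what an exported file contains before launching the visualizers, a tiny exploratory snippet (key names are whatever `run.py` saved; `tools/vis_train.py` and `tools/vis_volume.py` remain the authoritative readers):

```python
# List the arrays stored in an exported npz; key names depend on what run.py
# saved, so treat this as exploratory rather than a documented interface.
import numpy as np

d = np.load('cam_mic.npz')
for k in d.files:
    print(k, d[k].shape)
```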
(Left: inspecting the cameras & BBox. Right: inspecting the learned coarse volume.)
Speed and quality tradeoff
We report ablation experiments in our paper's supplementary material. Setting `N_iters`, `N_rand`, `num_voxels`, `rgbnet_depth`, or `rgbnet_width` to larger values, or setting `stepsize` to a smaller value, typically leads to better quality but requires more computation. The `weight_distortion` field affects training speed and quality as well. Only `stepsize` is tunable at test time; all other fields should remain the same as in training. A hypothetical tuning config is sketched below.
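As a concrete illustration, here is a hypothetical quality-leaning override config in the style of the files under `configs/`. All values are made up for illustration; verify every field name and default against `configs/default.py` before relying on it:

```python
# Hypothetical override config; field names and grouping mimic the style of
# the shipped configs but must be checked against configs/default.py.
_base_ = '../default.py'

expname = 'dvgo_lego_hq'            # hypothetical experiment name
basedir = './logs/nerf_synthetic'

fine_train = dict(
    N_iters=40000,                  # more optimization steps
    N_rand=8192,                    # rays per training batch
)
fine_model_and_render = dict(
    num_voxels=256**3,              # higher grid resolution
    rgbnet_depth=3,                 # shallow-MLP depth
    rgbnet_width=128,               # shallow-MLP width
    stepsize=0.25,                  # smaller step size => more samples per ray
)
```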
Advanced data structure
- Octree → Plenoxels: Radiance Fields without Neural Networks.
- Hash → Instant Neural Graphics Primitives with a Multiresolution Hash Encoding.
- Factorized components → TensoRF: Tensorial Radiance Fields.
You will need these data structures to scale to higher grid resolutions, but we believe our simple dense grid is still a good starting point if you have other challenging problems to deal with.
Acknowledgement
The code base originated from an awesome nerf-pytorch implementation, but it has since become very different from that code base.