Project page • arXiv
Code release for our ICCV 2023 paper:
Brent Yi, Weijia Zeng, Sam Buchanan, and Yi Ma. Canonical Factors for Hybrid Neural Fields. International Conference on Computer Vision (ICCV), 2023.
We study neural field architectures that rely on factored feature volumes, by (1) analyzing factored grids in 2D to characterize undesirable biases for axis-aligned signals, and (2) using the resulting insights to study TILTED, a family of hybrid neural field architectures that removes these biases.
This repository is structured as follows:
.
├── tilted
│   ├── core  - Code shared between experiments. Factored grid
│   │           and neural decoder implementations.
│   ├── nerf  - Neural radiance field rendering, training, and
│   │           dataloading utilities.
│   ├── rgb2d - 2D image reconstruction data and training
│   │           utilities.
│   └── sdf   - Signed distance field dataloading, training, and
│               meshing infrastructure.
│
├── paper_commands     - Commands used for running paper experiments (NeRF).
├── paper_results      - Output files used to generate paper tables (NeRF).
│                        Contains hyperparameters, evaluation metrics,
│                        runtimes, etc.
│
├── tables_nerf.ipynb  - Table generation notebook for NeRF experiments.
│
├── train_nerf.py      - Training script for neural radiance field experiments.
├── visualize_nerf.py  - Visualize trained neural radiance fields.
│
└── requirements.txt   - Python dependencies.
Note that training scripts for 2D and SDF experiments have not yet been released. Feel free to reach out if you need these.
This repository has been tested with Python 3.8, `jax==0.4.9`, and `jaxlib==0.4.9+cuda11.cudnn86`. We recommend first installing JAX via their official instructions: https://github.com/google/jax#installation
We've packaged dependencies into a `requirements.txt` file:
pip install -r requirements.txt
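As a quick sanity check before training (a sketch, not part of the repository), you can confirm that your interpreter matches the tested setup:

```python
# Environment sanity check (a sketch, not part of the repository).
# The paper's experiments used Python 3.8 with jax/jaxlib 0.4.9.
import sys

version = f"{sys.version_info.major}.{sys.version_info.minor}"
print(f"Python {version}")
assert sys.version_info >= (3, 8), "this repository was tested with Python 3.8"
```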
We use TensorBoard for logging.
After training, radiance fields can be interactively visualized. Helptext for the visualization script can be found via:
python visualize_nerf.py --help
As a runnable example, we've uploaded trained checkpoints for the Kitchen dataset here. These can be unzipped in `tilted/` and visualized via:
# Checkpoints can be selected via the dropdown on the right.
# The 'Reset Up Direction' button will also be helpful when orbiting / panning!
python visualize_nerf.py ./example_checkpoints
The visualization script supports RGB, PCA, and feature norm visualization:
(Demo video: TILTED.Visualizer.mov)
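For intuition, the PCA mode can be thought of as projecting each high-dimensional feature vector onto its top three principal components and mapping those to RGB channels. A minimal NumPy sketch of that idea (this is an illustration of the technique, not the repository's actual viewer code):

```python
# Sketch: visualize a (H, W, C) feature map as RGB via PCA.
# Illustrative only; not the code used by visualize_nerf.py.
import numpy as np


def pca_to_rgb(features: np.ndarray) -> np.ndarray:
    """Project a (H, W, C) feature map to a (H, W, 3) image in [0, 1]."""
    h, w, c = features.shape
    flat = features.reshape(-1, c).astype(np.float64)
    flat = flat - flat.mean(axis=0)  # Center before PCA.
    # Top principal directions via SVD of the centered features.
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    proj = flat @ vt[:3].T  # (H*W, 3) projection onto top-3 components.
    # Normalize each channel to [0, 1] for display.
    proj = (proj - proj.min(axis=0)) / (np.ptp(proj, axis=0) + 1e-8)
    return proj.reshape(h, w, 3)


rgb = pca_to_rgb(np.random.randn(4, 4, 16))
print(rgb.shape)  # → (4, 4, 3)
```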
The core viewer infrastructure has been moved into nerfstudio-project/viser, which may be helpful if you're interested in visualization for other projects.
Meshes for SDF experiments were downloaded from alecjacobson/common-3d-test-models/.
All NeRF datasets were downloaded using nerfstudio's `ns-download-data` command:
# Requires nerfstudio installation.
ns-download-data blender
ns-download-data nerfstudio all
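Before launching a run, it can help to fail fast on a missing dataset directory. A small sketch (the `data/blender/lego` layout in the usage comment is an assumption based on nerfstudio's defaults, not something this repository mandates):

```python
# Sketch: verify a dataset directory exists before passing it to
# train_nerf.py. Not part of the repository.
from pathlib import Path


def check_dataset(path: str) -> Path:
    """Raise early with a clear message if the dataset path is missing."""
    dataset = Path(path)
    if not dataset.is_dir():
        raise FileNotFoundError(f"Dataset not found: {dataset.resolve()}")
    return dataset


# Usage (hypothetical path, depending on where ns-download-data put the data):
# check_dataset("data/blender/lego")
```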
Commands we used for training NeRF models in the paper can be found in `paper_commands/`.
Here are two examples, which should run at ~65 it/sec on an RTX 4090:
# Train a model on a synthetic scene.
python train_nerf.py blender-kplane-32c-axis-aligned --dataset-path {path_to_data}
# Train a model on a real scene.
python train_nerf.py nerfstudio-kplane-32c-axis-aligned --dataset-path {path_to_data}
The `--help` flag can also be passed in to print helptext.
This is research code, so parts of it may be chaotic. We've put effort into refactoring and cleanup before release, but there's always more work to do here! If you have questions or comments, please reach out.
Some notes:
- The global orientation can have a large impact on the performance of baselines. `--render-config.global-rotate-seed INT` can be set in `train_nerf.py` to try a different global orientation; paper results sweep across `0`, `1`, and `2` for each synthetic scene.
- To speed things up, the bottleneck training step count can be dropped significantly without hurting performance. This is dictated by `--bottleneck.optim.projection-decay-start` and `--bottleneck.optim.projection-decay-steps`; bottleneck training stops as soon as the projection LR hits 0.
- Runtimes can vary significantly between machines. Our experiments were run using JAX `0.4.9` and CUDA `11.8` on RTX 4090 GPUs.
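The seed sweep described above could be scripted, for example, as follows (a sketch; the config name is taken from the examples earlier in this README, and the dataset path is a placeholder):

```python
# Sketch: generate the global-rotation-seed sweep commands described above
# (seeds 0, 1, and 2 for each synthetic scene). Not part of the repository.
def sweep_commands(config: str, dataset_path: str, seeds=(0, 1, 2)):
    """Build one train_nerf.py command per global rotation seed."""
    return [
        f"python train_nerf.py {config}"
        f" --dataset-path {dataset_path}"
        f" --render-config.global-rotate-seed {seed}"
        for seed in seeds
    ]


for cmd in sweep_commands("blender-kplane-32c-axis-aligned", "{path_to_data}"):
    print(cmd)
```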
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant DGE 2146752. YM acknowledges partial support from the ONR grant N00014-22-1-2102, the joint Simons Foundation-NSF DMS grant 2031899, and a research grant from TBSI.
If any of this work is useful to you, you can also cite:
@inproceedings{tilted2023,
author = {Yi, Brent and Zeng, Weijia and Buchanan, Sam and Ma, Yi},
title = {Canonical Factors for Hybrid Neural Fields},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2023},
}