
AutoInt: Automatic Integration for Fast Neural Volume Rendering
CVPR 2021

Project Page | Video | Paper

Open Colab
PyTorch implementation of automatic integration.
AutoInt: Automatic Integration for Fast Neural Volume Rendering
David B. Lindell*, Julien N. P. Martel*, Gordon Wetzstein
Stanford University
*denotes equal contribution
in CVPR 2021

Quickstart

To get started quickly, we provide a Colab link above. Otherwise, you can clone this repo and follow the instructions below.

To set up a conda environment, download example training data, begin the training process, and launch TensorBoard:

conda env create -f environment.yml
conda activate autoint 
cd experiment_scripts
python train_1d_integral.py
tensorboard --logdir=../logs --port=6006

This example will fit a grad network to a 1D signal and evaluate the integral. You can monitor the training in your browser at localhost:6006. You can also train a network on the sparse tomography problem presented in the paper with python train_sparse_tomography.py.
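
For intuition, here is a minimal sketch of the idea this example demonstrates: fit the derivative of an "integral network" to a 1D signal, then read off definite integrals of that signal as differences of the network. Everything below (architecture, activation, helper names, training loop) is an assumption for illustration, not the repo's implementation.

import torch
import torch.nn as nn

class IntegralNet(nn.Module):
    # Phi(x): the "integral network" whose derivative is fit to the signal
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def grad_of(model, x):
    # d/dx Phi(x) via autograd; AutoInt builds this "grad network" explicitly
    # so that it shares parameters with the integral network
    x = x.requires_grad_(True)
    y = model(x)
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

signal = lambda x: torch.sin(x)              # toy 1D signal to integrate
model = IntegralNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):                        # fit dPhi/dx to the signal
    x = torch.rand(256, 1) * 2 * torch.pi
    loss = ((grad_of(model, x) - signal(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Fundamental theorem of calculus: integral of the signal over [0, pi/2]
a = torch.zeros(1, 1)
b = torch.full((1, 1), torch.pi / 2)
print((model(b) - model(a)).item())          # should be close to 1.0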

AutoInt for Neural Rendering

Automatic integration can be used to learn closed-form solutions to the volume rendering equation, an integral equation that accumulates transmittance and emittance along rays to render an image. While conventional neural renderers require hundreds of samples along each ray to evaluate these integrals (and hence hundreds of costly forward passes through a network), AutoInt allows these integrals to be evaluated with far fewer forward passes.
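
As a rough illustration of how this is used at render time (a hedged sketch with assumed names, not the repo's code): given an antiderivative network Sigma for the density along a ray and an antiderivative network C for the density-weighted color, the ray is split into a handful of sections, and each section's contribution needs only the antiderivative values at its endpoints instead of dense quadrature. The piecewise-per-section approximation below follows the spirit of the paper, not its exact formulation.

import torch

def render_ray(Sigma, C, t_near, t_far, n_sections=8):
    # Sigma(t): antiderivative of volume density along the ray
    # C(t): antiderivative of density-weighted emitted color along the ray
    t = torch.linspace(t_near, t_far, n_sections + 1).unsqueeze(-1)
    s, c = Sigma(t), C(t)                   # shape (n_sections + 1, 1)
    delta = s[1:] - s[:-1]                  # optical depth of each section
    emit = c[1:] - c[:-1]                   # emitted radiance of each section
    # transmittance accumulated up to the start of each section
    T = torch.exp(-torch.cumsum(
        torch.cat([torch.zeros(1, 1), delta[:-1]]), dim=0))
    return (T * emit).sum(dim=0)            # composited color for this ray

# e.g. with toy antiderivatives (in practice these are trained networks):
color = render_ray(lambda t: t, lambda t: 0.5 * t ** 2, 0.0, 1.0)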

Training

To run AutoInt for neural rendering, first set up the conda environment with

conda env create -f environment.yml
conda activate autoint 

Then, download the datasets to the data folder. We allow training on any of three datasets. The synthetic Blender data from NeRF and the LLFF scenes are hosted here. The DeepVoxels data are hosted here.

Finally, use the provided config files in the experiment_scripts/configs folder to train on these datasets. For example, to train on a NeRF Blender dataset, run the following:

python train_autoint_radiance_field.py --config ./configs/config_blender_tiny.ini
tensorboard --logdir=../logs/ --port=6006

This will train a small, low-resolution scene. To train scenes at high resolution (which requires a few days of training time), use the config_blender.ini, config_deepvoxels.ini, or config_llff.ini config files.

Rendering

Rendering from a trained model can be done with the following command.

python train_autoint_radiance_field.py --config /path/to/config/file --render_model ../logs/path/to/log/directory <epoch number> --render_output /path/to/output/folder

Here, the --render_model argument indicates the log directory where the code saves the models and checkpoints. For example, this would be ../logs/blender_lego for the default Blender dataset. The epoch number can be found by looking at the numbers in the saved checkpoint filenames in ../logs/blender_lego/checkpoints/. Finally, --render_output should specify a folder where the output rendered images will be generated.
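
For example, assuming the default Blender lego scene has been trained and a checkpoint exists for epoch 1000 (the epoch number and the output folder here are placeholders to adapt to your own run):

python train_autoint_radiance_field.py --config ./configs/config_blender.ini --render_model ../logs/blender_lego 1000 --render_output ../renders/blender_lego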

Citation

@inproceedings{autoint2021,
  title={AutoInt: Automatic Integration for Fast Neural Volume Rendering},
  author={David B. Lindell and Julien N. P. Martel and Gordon Wetzstein},
  year={2021},
  booktitle={Proc. CVPR},
}

More Repositories

1. ACORN (Python, 289 stars): ACORN: Adaptive Coordinate Networks for Neural Scene Representation | SIGGRAPH 2021
2. GSM (Jupyter Notebook, 196 stars): Gaussian Shell Maps for Efficient 3D Human Generation (CVPR 2024)
3. bacon (Python, 173 stars): Official repository for "Band-limited Coordinate Networks for Multiscale Scene Representation" | CVPR 2022
4. neural-holography (Python, 162 stars): Code and data for Neural Holography
5. opticalCNN (Jupyter Notebook, 124 stars): Hybrid optical-electronic convolutional neural networks
6. nlos-fk (MATLAB, 70 stars): Processing code for "Wave-Based Non-Line-of-Sight Imaging using Fast f-k Migration"
7. holographic-AR-glasses (Python, 61 stars)
8. AcousticNLOS (Python, 56 stars): Processing code for acoustic non-line-of-sight imaging
9. DepthFromDefocusWithLearnedOptics (Python, 54 stars): ICCP 2021: Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation
10. DeepOpticsHDR (Python, 52 stars): Code associated with the paper "Deep Optics for Single-shot High-dynamic-range Imaging"
11. neural-3d-holography (Python, 44 stars): Code and data for Neural 3D Holography | SIGGRAPH Asia 2021
12. GraphPDE (Jupyter Notebook, 43 stars)
13. confocal-diffuse-tomography (Python, 30 stars): Code and data for "Three-dimensional imaging through scattering media based on confocal diffuse tomography"
14. ThreeDeconv.jl (Julia, 30 stars): A convex 3D deconvolution algorithm for low photon count fluorescence imaging
15. partially_coherent_neural_holography (Python, 26 stars)
16. KeyholeImaging (Python, 24 stars): Code associated with the paper "Keyhole Imaging: Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path"
17. olas (MATLAB, 20 stars): Overlap-Add Stereograms source code
18. nlos-dlct (MATLAB, 19 stars): Non-line-of-sight Surface Reconstruction Using the Directional LCT
19. time-multiplexed-neural-holography (Python, 19 stars): Code and data for Time-multiplexed Neural Holography | SIGGRAPH 2022
20. diffusion-in-the-dark (Jupyter Notebook, 17 stars): Repository for Diffusion in the Dark (WACV 2024)
21. single_spad_depth (Jupyter Notebook, 11 stars): Code for Disambiguating Monocular Depth Estimation with a Single Transient
22. spad_pileup (MATLAB, 11 stars)
23. EE267-Spring2022 (JavaScript, 7 stars)
24. DeepS3PR (Python, 6 stars): Code associated with the paper "Deep S3PR: Simultaneous Source Separation and Phase Retrieval Using Deep Generative Models"
25. multishot-localization-microscopy (Python, 4 stars)
26. spad_single (Jupyter Notebook, 4 stars)
27. PixelRNN (Python, 2 stars): Official implementation of PixelRNN: In-Pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors
28. EE267-Spring2024 (JavaScript, 1 star): A repository for the starter code of homework for EE267