ACORN: Adaptive Coordinate Networks for Neural Scene Representation
SIGGRAPH 2021

Project Page | Video | Paper

PyTorch implementation of ACORN.
ACORN: Adaptive Coordinate Networks for Neural Scene Representation
Julien N. P. Martel*, David B. Lindell*, Connor Z. Lin, Eric R. Chan, Marco Monteiro, Gordon Wetzstein
Stanford University
*denotes equal contribution
in SIGGRAPH 2021

Quickstart

To set up a conda environment, download example training data, begin training, and launch TensorBoard, run the commands below. You will also need to register for and install an academic license for the Gurobi optimizer (free for academic use).

conda env create -f environment.yml
# before proceeding, install Gurobi optimizer license (see above web link)
conda activate acorn 
cd inside_mesh
python setup.py build_ext --inplace
cd ../experiment_scripts
python train_img.py --config ./config_img/config_pluto_acorn_1k.ini
tensorboard --logdir=../logs --port=6006

This example fits a 1 megapixel image of Pluto. You can monitor training in your browser at localhost:6006.

Adaptive Coordinate Networks

An adaptive coordinate network learns an adaptive decomposition of the signal domain, allowing the network to fit signals faster and more accurately. We demonstrate using ACORN to fit large-scale images and detailed 3D occupancy fields.
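The domain decomposition can be illustrated with a toy quadtree that recursively subdivides square regions whose content is hard to fit. This is only a minimal sketch of the idea, not the paper's learned multiscale block parameterization; local intensity variance stands in for fitting error, and the image and threshold below are made up:

```python
# Toy quadtree decomposition: recursively split square regions whose
# intensity variance (a stand-in for fitting error) exceeds a threshold.

def variance(img, x, y, size):
    vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def decompose(img, x=0, y=0, size=None, thresh=0.01):
    """Return a list of (x, y, size) blocks covering the image."""
    if size is None:
        size = len(img)
    if size == 1 or variance(img, x, y, size) <= thresh:
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
        blocks += decompose(img, x + dx, y + dy, half, thresh)
    return blocks

# A 4x4 image: flat on the left, a sharp edge on the right.
img = [[0.0, 0.0, 0.0, 1.0],
       [0.0, 0.0, 0.0, 1.0],
       [0.0, 0.0, 1.0, 1.0],
       [0.0, 0.0, 1.0, 1.0]]
blocks = decompose(img)
```

The flat regions stay as large blocks while the quadrant containing the edge is split down to single pixels, so capacity concentrates where the signal has detail.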

Datasets

Image and 3D model datasets should be downloaded and placed in the data directory. The datasets used in the paper can be accessed as follows.

Training

To use ACORN, first set up the conda environment and build the Cython extension with

conda env create -f environment.yml
conda activate acorn 
cd inside_mesh
python setup.py build_ext --inplace

Then, download the datasets to the data folder.

We use Gurobi to solve the integer linear program used in the optimization. A free academic license can be installed from this link.
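The flavor of this integer program can be shown with a toy version solved by brute force (the actual optimization uses Gurobi on a larger problem; the per-block errors, costs, and budget below are invented for illustration):

```python
from itertools import product

# Toy block-allocation problem: each block can be kept coarse (cost 1)
# or refined (cost 4); refining reduces its fitting error. Choose which
# blocks to refine to minimize total error within a fixed budget.

errors_coarse = [0.9, 0.5, 0.1, 0.8]   # made-up per-block errors
errors_fine   = [0.2, 0.3, 0.05, 0.1]  # error if the block is refined
budget = 10                            # total allowed cost

best = None
for choice in product([0, 1], repeat=len(errors_coarse)):
    cost = sum(4 if c else 1 for c in choice)
    if cost > budget:
        continue
    err = sum(errors_fine[i] if c else errors_coarse[i]
              for i, c in enumerate(choice))
    if best is None or err < best[0]:
        best = (err, choice)

err, choice = best
```

Here the budget admits at most two refinements, and the solver spends them on the two blocks with the largest error reduction. A real ILP solver handles the same structure at scale, where enumeration is infeasible.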

To train image representations, use the config files in the experiment_scripts/config_img folder. For example, to train on the Pluto image, run the following

python train_img.py --config ./config_img/config_pluto_1k.ini
tensorboard --logdir=../logs/ --port=6006

After the image representation has been trained, the decomposition and images can be exported using the following command.

python train_img.py --config ../logs/<experiment_name>/config.ini --resume ../logs/<experiment_name> <iteration #> --eval

Exported images will appear in the ../logs/<experiment_name>/eval folder, where <experiment_name> is the subdirectory in the log folder corresponding to the particular training run.

To train 3D models, download the datasets, and then use the corresponding config file in experiment_scripts/config_occupancy. For example, a small model representing the Lucy statue can be trained with

python train_occupancy.py --config ./config_occupancy/config_lucy_small_acorn.ini

Then a mesh of the final model can be exported with

python train_occupancy.py --config ../logs/<experiment_name>/config.ini --load ../logs/<experiment_name> --export

This will create a .dae mesh file in the ../logs/<experiment_name> folder.

Citation

@article{martel2021acorn,
  title={ACORN: {Adaptive} coordinate networks for neural scene representation},
  author={Julien N. P. Martel and David B. Lindell and Connor Z. Lin and Eric R. Chan and Marco Monteiro and Gordon Wetzstein},
  journal={ACM Trans. Graph. (SIGGRAPH)},
  volume={40},
  number={4},
  year={2021},
}

Acknowledgments

We include the MIT-licensed inside_mesh code in this repo from Lars Mescheder, Michael Oechsle, Michael Niemeyer, Andreas Geiger, and Sebastian Nowozin, originally included in their Occupancy Networks repository.

J.N.P. Martel was supported by a Swiss National Foundation (SNF) Fellowship (P2EZP2 181817). C.Z. Lin was supported by a David Cheriton Stanford Graduate Fellowship. G.W. was supported by an Okawa Research Grant, a Sloan Fellowship, and a PECASE by the ARO. Other funding for the project was provided by NSF (award numbers 1553333 and 1839974).

Errata

  • The 3D shape fitting metrics were reported in the paper as calculated using the Chamfer-L1 distance. The metric should have been labeled Chamfer-L2, which is consistent with the implementation in this repository.
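For reference, the symmetric Chamfer-L2 distance mentioned above can be sketched as follows: for each point, take the squared Euclidean distance to its nearest neighbor in the other set, average within each direction, and sum the two directions. Normalization conventions vary, so check this repository's evaluation code for the exact form used; the point sets below are made up:

```python
# Symmetric Chamfer-L2 distance between two point sets, using squared
# Euclidean nearest-neighbor distances averaged over both directions.

def chamfer_l2(A, B):
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    def one_way(src, dst):
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)
    return one_way(A, B) + one_way(B, A)

A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
B = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
d = chamfer_l2(A, B)
```

A Chamfer-L1 variant would instead sum un-squared (or absolute per-axis) distances, which is why the two labels are not interchangeable when reporting metrics.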
