  • Stars: 588
  • Rank: 76,022 (Top 2%)
  • Language: Python
  • License: Other
  • Created: about 1 year ago
  • Updated: 10 months ago

Repository Details

Flexible Isosurface Extraction for Gradient-Based Mesh Optimization (FlexiCubes)
Official PyTorch implementation

Teaser image

FlexiCubes is a high-quality isosurface representation specifically designed for gradient-based mesh optimization with respect to geometric, visual, or even physical objectives. For more details, please refer to our paper and project page.

Highlights

Getting Started

The core functions of FlexiCubes are now available in Kaolin starting from v0.15.0. See the installation instructions here and the API documentation here.

The original code of the paper is still available in flexicubes.py.
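For orientation, here is a minimal sketch of extracting a mesh with that interface: build a voxel grid, evaluate a scalar field at its vertices, and call the extractor. It is modeled on the FlexiCubes class in flexicubes.py; the Kaolin class exposes an equivalent interface, but treat the exact module path and argument names here as assumptions and check the linked API documentation.

import torch
from flexicubes import FlexiCubes  # via Kaolin, import the class documented in the link above

device = "cuda"
res = 32                                        # number of grid cells per axis

fc = FlexiCubes(device)
x_nx3, cube_fx8 = fc.construct_voxel_grid(res)  # grid vertices (N, 3) and per-cube corner indices (M, 8)
sdf = x_nx3.norm(dim=-1) - 0.35                 # example scalar field: a sphere

# With the flexible weights left at their defaults this behaves like a plain
# dual-style isosurface extraction; L_dev is the regularizer term used during optimization.
vertices, faces, L_dev = fc(x_nx3, sdf, cube_fx8, res)
print(vertices.shape, faces.shape)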

Example Usage

Gradient-Based Mesh Optimization

We provide examples demonstrating how to use FlexiCubes to reconstruct unknown meshes through gradient-based optimization. Specifically, starting from a randomly initialized SDF, we optimize the shape towards a reference mesh by minimizing their geometric difference, measured by multiview mask and depth losses. This workflow is a simplified version of nvdiffrec, with code largely borrowed from the nvdiffrec GitHub repository. We use the same pipeline for the analysis in Section 3 and the main experiments in Section 5 of our paper. A detailed tutorial is provided in examples/optimization.ipynb, along with an optimization script in examples/optimize.py that accepts command-line arguments.
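To make that loop concrete, below is a hedged sketch under the same assumed flexicubes.py interface as above. The rendering-based mask and depth losses from optimize.py are replaced by hypothetical placeholders (load_reference_points, reconstruction_loss), and the weight shapes and deformation scaling are assumptions modeled on the example script rather than copied from it.

import torch
from flexicubes import FlexiCubes

device, res = "cuda", 64
fc = FlexiCubes(device)
x_nx3, cube_fx8 = fc.construct_voxel_grid(res)

# Optimizable quantities: a randomly initialized per-vertex SDF, a per-vertex
# grid deformation, and per-cube flexible weights (beta, alpha, gamma).
sdf = torch.nn.Parameter(0.1 * torch.randn(x_nx3.shape[0], device=device))
deform = torch.nn.Parameter(torch.zeros_like(x_nx3))
weights = torch.nn.Parameter(torch.zeros(cube_fx8.shape[0], 21, device=device))
optimizer = torch.optim.Adam([sdf, deform, weights], lr=1e-2)

ref_points = load_reference_points()  # hypothetical helper: points sampled on the reference mesh

for it in range(1000):
    optimizer.zero_grad()
    grid_verts = x_nx3 + (0.45 / res) * torch.tanh(deform)  # keep offsets inside each cell (assumed scaling)
    vertices, faces, L_dev = fc(
        grid_verts, sdf, cube_fx8, res,
        beta_fx12=weights[:, :12], alpha_fx8=weights[:, 12:20],
        gamma_f=weights[:, 20], training=True)
    loss = reconstruction_loss(vertices, faces, ref_points)  # hypothetical stand-in for the mask/depth losses
    loss = loss + 0.5 * L_dev.mean()                         # regularizer returned by FlexiCubes
    loss.backward()
    optimizer.step()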

To run the examples, we suggest installing the Conda environment as detailed below:

conda create -n flexicubes python=3.9
conda activate flexicubes
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
pip install imageio trimesh tqdm matplotlib torch_scatter ninja
pip install git+https://github.com/NVlabs/nvdiffrast/
pip install kaolin==0.15.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.0_cu113.html

Then download the dataset collected by Myles et al. as follows. We include one shape in 'examples/data/inputmodels/block.obj' if you want to test without downloading the full dataset.

cd examples
python download_data.py

After downloading the data, run shape optimization with the following example command:

python optimize.py --ref_mesh data/inputmodels/block.obj --out_dir out/block

You can find visualizations and output meshes in the out/block directory. Below, we show the initial and final shapes during optimization, with the reference shape on the right.

Images: block_init (initial shape) and block_final (optimized shape)

To further demonstrate the flexibility of the FlexiCubes representation, which can accommodate both reconstruction objectives and regularizers defined on the extracted mesh, you can add a developability regularizer (proposed by Stein et al.) to the previous reconstruction pipeline to encourage fabricability from flat panels:

python optimize.py --ref_mesh data/inputmodels/david.obj --out_dir out/david_dev --develop_reg True  --iter=1250

Extract mesh from known signed distance field

While not its designated use case, our function can extract a mesh from a known Signed Distance Field (SDF) without optimization. Please refer to the tutorial found in examples/extraction.ipynb for details.
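As a rough illustration of that workflow (under the same assumed interface as the sketches above, not code taken from the notebook), an analytic SDF can be evaluated on the grid in a single forward pass and the resulting mesh saved with trimesh, which is already part of the suggested environment:

import torch
import trimesh
from flexicubes import FlexiCubes

device, res = "cuda", 64
fc = FlexiCubes(device)
x_nx3, cube_fx8 = fc.construct_voxel_grid(res)

# Known (analytic) SDF: a torus with major radius 0.3 and minor radius 0.1.
ring = torch.stack([x_nx3[:, 0], x_nx3[:, 2]], dim=-1).norm(dim=-1) - 0.3
sdf = torch.sqrt(ring ** 2 + x_nx3[:, 1] ** 2) - 0.1

vertices, faces, _ = fc(x_nx3, sdf, cube_fx8, res)  # single extraction, no optimization
trimesh.Trimesh(vertices.detach().cpu().numpy(), faces.cpu().numpy()).export("torus.obj")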

Tips for using FlexiCubes

Regularization losses:

We commonly use three regularizers in our mesh optimization pipelines, referenced at lines L104-L106 of examples/optimize.py. The weights of these regularizers should be scaled according to your application objectives. We suggest starting with low weights, because strong regularization can hinder convergence; you can incrementally increase them if you notice artifacts appearing in the optimized meshes (a sketch of combining the terms follows the list and figure below). Specifically:

  • The loss function at L104 helps to remove floaters in areas of the shape that are not supervised by the application objective, such as internal faces when using image supervision only.
  • The L_dev loss at L105 can be increased if you observe artifacts in flat areas, as illustrated in the image below.
  • Generally, the L1 regularizer on flexible weights at L106 does not have a significant impact during the optimization of a single shape. However, we found it to be effective in stabilizing training in generative pipelines such as GET3D.

Ablating L_dev
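The snippet below is a hedged sketch of how these three terms might be combined with an application objective; the actual implementation lives at L104-L106 of examples/optimize.py and may differ. The floater-removal term is taken as a pre-computed input rather than reproduced, and the weight values are illustrative starting points only.

def total_loss(recon_loss, floater_reg, L_dev, weights,
               w_floater=0.2, w_dev=0.5, w_weight=0.1):
    # Illustrative sketch, not the code at L104-L106. The w_* values are
    # arbitrary starting points; raise a weight gradually only if the
    # corresponding artifact appears during optimization.
    return (recon_loss                          # application objective (e.g. mask/depth losses)
            + w_floater * floater_reg           # floater-removal term (L104), computed elsewhere
            + w_dev * L_dev.mean()              # L_dev returned by the FlexiCubes call (L105)
            + w_weight * weights.abs().mean())  # L1 on the flexible weights (L106)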

Resolution of voxel grid vs. tetrahedral grid:

If you are switching from our previous work, DMTet, it's important to note the difference in grid resolution compared to FlexiCubes. In both implementations, the resolution is defined by the edge length: a grid resolution of n means the grid edge length is 1/n for both the voxel and tetrahedral grids. However, a tetrahedral grid with a resolution of n contains only (n/2+1)³ grid vertices, in contrast to the (n+1)³ vertices in a voxel grid. Consequently, if you switch from DMTet to FlexiCubes while maintaining the same resolution, you will notice not only a denser output mesh but also a substantial increase in computational cost. To align the triangle counts of the output meshes more closely, we recommend a 4:5 resolution ratio between the voxel grid and the tetrahedral grid. For instance, in our paper, 64³ FlexiCubes generates approximately the same number of triangles as 80³ DMTet.
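A quick back-of-the-envelope check of those formulas (simple arithmetic, not repo code):

def voxel_grid_vertices(n: int) -> int:
    return (n + 1) ** 3        # FlexiCubes voxel grid at resolution n

def tet_grid_vertices(n: int) -> int:
    return (n // 2 + 1) ** 3   # DMTet tetrahedral grid at resolution n

print(voxel_grid_vertices(64), tet_grid_vertices(64))  # 274625 vs 35937: same resolution, far denser voxel grid
print(voxel_grid_vertices(64), tet_grid_vertices(80))  # 274625 vs 68921: the paper's 64^3 vs 80^3 pairing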

Applications

FlexiCubes is now integrated into NVIDIA applications as a drop-in replacement for DMTet. You can visit their GitHub pages to see how FlexiCubes is used in advanced photogrammetry and 3D generative pipelines.

Extracting Triangular 3D Models, Materials, and Lighting From Images (nvdiffrec)

GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images

License

Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This work is made available under the Nvidia Source Code License.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing.

Citation

@article{shen2023flexicubes,
author = {Shen, Tianchang and Munkberg, Jacob and Hasselgren, Jon and Yin, Kangxue and Wang, Zian 
        and Chen, Wenzheng and Gojcic, Zan and Fidler, Sanja and Sharp, Nicholas and Gao, Jun},
title = {Flexible Isosurface Extraction for Gradient-Based Mesh Optimization},
year = {2023},
issue_date = {August 2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {42},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3592430},
doi = {10.1145/3592430},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {37},
numpages = {16}
}

More Repositories

  • GET3D (Python, 4,208 stars)
  • lift-splat-shoot (Python, 986 stars): Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D (ECCV 2020)
  • GSCNN (Python, 916 stars): Gated-Shape CNN for Semantic Segmentation (ICCV 2019)
  • nglod (Python, 857 stars): Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes (CVPR 2021 Oral)
  • LION (Python, 754 stars): Latent Point Diffusion Models for 3D Shape Generation
  • NKSR (Python, 751 stars): [CVPR 2023 Highlight] Neural Kernel Surface Reconstruction
  • ASE (Python, 745 stars)
  • DIB-R (Python, 655 stars): Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer (NeurIPS 2019)
  • editGAN_release (Python, 629 stars)
  • STEAL (Jupyter Notebook, 477 stars): STEAL - Learning Semantic Boundaries from Noisy Annotations (CVPR 2019)
  • datasetGAN_release (Python, 340 stars)
  • ATISS (Python, 255 stars): Code for "ATISS: Autoregressive Transformers for Indoor Scene Synthesis", NeurIPS 2021
  • XCube (Python, 240 stars): [CVPR 2024 Highlight] XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies
  • vqad (225 stars)
  • vid2player3d (Python, 223 stars): Official implementation for SIGGRAPH 2023 paper "Learning Physically Simulated Tennis Skills from Broadcast Videos"
  • GameGAN_code (Python, 222 stars): Learning to Simulate Dynamic Environments with GameGAN (CVPR 2020)
  • CLD-SGM (Python, 194 stars): Score-Based Generative Modeling with Critically-Damped Langevin Diffusion
  • semanticGAN_code (Python, 180 stars): Official repo for SemanticGAN https://nv-tlabs.github.io/semanticGAN/
  • meta-sim (Python, 171 stars): Meta-Sim: Learning to Generate Synthetic Datasets (ICCV 2019)
  • DefTet (Cuda, 169 stars): Learning Deformable Tetrahedral Meshes for 3D Reconstruction (NeurIPS 2020)
  • PADL (105 stars)
  • STRIVE (Python, 104 stars): Code for CVPR 2022 paper "Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior"
  • DriveGAN_code (CSS, 93 stars): Code release for DriveGAN (CVPR 2021)
  • 3DiffTection (92 stars)
  • GENIE (Python, 88 stars): GENIE: Higher-Order Denoising Diffusion Solvers
  • bigdatasetgan_code (Python, 87 stars): project page: https://nv-tlabs.github.io/big-datasetgan/
  • stmc (Python, 77 stars): Implementation of "Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation" from CVPR Workshop on Human Motion Generation 2024
  • DPDM (Python, 76 stars): Differentially Private Diffusion Models
  • AUV-NET (Python, 75 stars)
  • DIB-R-Single-Image-3D-Reconstruction (Python, 73 stars)
  • trace (Python, 68 stars): Official implementation of TRACE, the TRAjectory Diffusion Model for Controllable PEdestrians, from the CVPR 2023 paper "Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion"
  • pacer (Python, 57 stars): Official implementation of PACER, Pedestrian Animation ControllER, from the CVPR 2023 paper "Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion"
  • planning-centric-metrics (Python, 52 stars): Learning to Evaluate Perception Models Using Planner-Centric Metrics
  • DiffusionTexturePainting (Python, 51 stars): [SIGGRAPH 2024] Diffusion Texture Painting
  • editGAN (43 stars)
  • meta-sim-structure (31 stars): Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation (ECCV 2020)
  • GANverse3D (27 stars)
  • gameGAN (CSS, 26 stars): Project page for GameGAN
  • VideoLDM (HTML, 24 stars)
  • brushstroke_engine (Jupyter Notebook, 23 stars): Code accompanying the Neural Brushstroke Engine paper, SIGGRAPH Asia 2022
  • 3DStyleNet (18 stars)
  • nv-tlabs.github.io (HTML, 16 stars): NVIDIA Toronto AI Lab public website
  • fDAL (Python, 14 stars)
  • MvDeCor (Python, 13 stars)
  • semanticGAN (13 stars): https://nv-tlabs.github.io/semanticGAN/
  • compact-ngp (13 stars)
  • fed-sim (11 stars): Federated Simulation for Medical Imaging (MICCAI 2020)
  • DP-Sinkhorn_code (Python, 11 stars)
  • DMTet (HTML, 10 stars)
  • big-datasetgan (HTML, 9 stars): https://nv-tlabs.github.io/big-datasetgan/
  • datasetGAN (8 stars)
  • fegr (HTML, 8 stars)
  • NTG (8 stars): NTG - Neural Turtle Graphics for Modeling City Road Layouts (ICCV 2019)
  • inverse-rendering-3d-lighting (7 stars): Project page for "Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting" (ICCV 2021)
  • flexicubes_website (5 stars)
  • tesmo (5 stars): Official implementation of TeSMo, a method for text-controlled scene-aware motion generation, from the ECCV 2024 paper "Generating Human Interaction Motions in Scenes with Text Control"
  • nkf (HTML, 4 stars): Project page of Neural Fields as Learnable Kernels for 3D Reconstruction
  • XDGAN (HTML, 4 stars): XDGAN: Multi-Modal 3D Shape Generation in 2D Space
  • DriveGAN (CSS, 3 stars)
  • physics-pose-estimation-project-page (HTML, 3 stars)
  • outdoor-ar (HTML, 3 stars)
  • hipnet (CSS, 3 stars)
  • simulation-strategies (2 stars): Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation
  • equivariant (CSS, 2 stars)
  • estimatingrequirements (HTML, 2 stars): Project page for the paper "How Much More Data Do I Need? Estimating Requirements For Downstream Tasks"
  • adaptive-shells-website (HTML, 2 stars)
  • LearnOptimizeCollect (HTML, 1 star): Project page for the paper "Optimizing Data Collection In Machine Learning"
  • DP-Sinkhorn (HTML, 1 star): Project page for DP-Sinkhorn (NeurIPS 2021)
  • PMGAN (CSS, 1 star)
  • hugo-backend (Shell, 1 star): Hugo backend for the main page
  • lip-mlp (HTML, 1 star)
  • unicon (HTML, 1 star)
  • DIBRPlus (1 star)