PyTorch implementation of DiffusionNet for fast and robust learning on 3D surfaces such as meshes and point clouds.

DiffusionNet is a general-purpose method for deep learning on surfaces such as 3D triangle meshes and point clouds. It is well suited to tasks such as segmentation, classification, and feature extraction.

Why try DiffusionNet?

  • It is efficient and scalable. On a single GPU, we can easily train on meshes of 20k vertices, and infer on meshes with 200k vertices. One-time preprocessing takes a few seconds in the former case, and about a minute in the latter.
  • It is sampling agnostic. Many graph-based mesh learning approaches tend to overfit to mesh connectivity, and can output nonsense when you run them on meshes that are triangulated differently from the training set. With DiffusionNet we can intermingle different triangulations and very coarse or fine meshes without issue. No special regularization or data augmentation needed!
  • It is representation agnostic. For instance, you can train on a mesh and infer on a point cloud, or mix meshes and point clouds in the training set.
  • It is robust. DiffusionNet avoids potentially brittle geometric operations and does not impose assumptions such as manifoldness.
  • It is data efficient. DiffusionNet can learn from tens of models, even without any data augmentation.

DiffusionNet is described in the paper "DiffusionNet: Discretization Agnostic Learning on Surfaces", by Nicholas Sharp, Souhaib Attaiki, Keenan Crane, and Maks Ovsjanikov.

(figure: DiffusionNet network diagram)

Outline

  • diffusion_net/src: implementation of the method, including preprocessing, layers, etc.
  • experiments: examples and scripts to reproduce the experiments from the DiffusionNet paper.
  • environment.yml: a conda environment file which can be used to install packages.

Prerequisites

DiffusionNet depends on PyTorch, as well as a handful of other fairly typical numerical packages. These can usually be installed manually without much trouble, but alternatively a conda environment file is also provided (see the conda documentation for additional instructions). These package versions were tested with CUDA 10.1 and 11.1.

conda env create --name diffusion_net -f environment.yml
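
If you prefer a manual install, something along these lines should suffice as a sketch (consult environment.yml for the exact tested package list and pinned versions):

pip install torch numpy scipy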

The code assumes a GPU with CUDA support. DiffusionNet has minimal memory requirements; >4GB GPU memory should be sufficient.

Applying DiffusionNet to your task

The DiffusionNet class can be applied to meshes or point clouds. The basic recipe looks like:

import torch
import diffusion_net

# Here we use Nx3 positions as features. Any other features you might have will work!
# See our experiments for the use of HKS features, which are naturally
# invariant to (isometric) deformations.
C_in = 3

# Output dimension (e.g., for a 10-class segmentation problem)
C_out = 10 

# Create the model
model = diffusion_net.layers.DiffusionNet(
            C_in=C_in,
            C_out=C_out,
            C_width=128, # internal width of the network; 32 -- 512 is a reasonable range
            last_activation=lambda x : torch.nn.functional.log_softmax(x,dim=-1), # apply a final log-softmax to outputs
                                                                                  # (set to the default None to output general values in R^{N x C_out})
            outputs_at='vertices')

# An example epoch loop.
# For a dataloader example see experiments/human_segmentation_original/human_segmentation_original_dataset.py
for sample in your_dataset:
    
    verts = sample.vertices  # (Vx3 array of vertices)
    faces = sample.faces     # (Fx3 array of faces, None for point cloud) 
    
    # center and unit scale
    verts = diffusion_net.geometry.normalize_positions(verts)
    
    # Get the geometric operators needed to evaluate DiffusionNet. This routine 
    # automatically populates a cache, precomputing only if needed.
    # TIP: Do this once in a dataloader and store in memory to further improve 
    # performance; see examples.
    frames, mass, L, evals, evecs, gradX, gradY = \
        diffusion_net.geometry.get_operators(verts, faces, op_cache_dir='my/cache/directory/')
    
    # this example uses vertex positions as features 
    features = verts
    
    # Forward-evaluate the model
    # outputs is an NxC_out array of values (here, per-vertex log-probabilities)
    outputs = model(features, mass, L=L, evals=evals, evecs=evecs, gradX=gradX, gradY=gradY, faces=faces)
    
    # Now do whatever you want! Apply your favorite loss function,
    # backpropagate with loss.backward() to train the DiffusionNet, etc.
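
For instance, with per-vertex class labels and the log-softmax output above, a single training step inside the loop could look like the following sketch (sample.labels and optimizer are illustrative assumptions, not part of the library):

    # Hypothetical continuation of the epoch loop above
    labels = sample.labels   # (V,) integer class label per vertex (assumed)

    # outputs are log-probabilities, so negative log-likelihood applies directly
    loss = torch.nn.functional.nll_loss(outputs, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()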

See the examples in experiments/ for complete examples, including dataloaders, other features, optimizers, etc. Please feel free to file an issue to discuss applying DiffusionNet to your problem!

Tips and Tricks

By default, DiffusionNet uses spectral acceleration for fast performance. This requires some CPU-based precomputation to build the operators and eigendecompositions for each input, which can take a few seconds for moderately sized inputs. DiffusionNet is fastest when this precomputation only needs to be performed once per dataset, rather than once per input.

  • If you are learning on a template mesh, consider precomputing operators for the reference pose of the template, but then using the xyz coordinates of the deformed pose as inputs to the network. This is a slight approximation, but it makes DiffusionNet very fast, since the precomputed operators are shared among all poses (see the sketch after this list).
  • If you need data augmentation, try to apply augmentations after computing operators whenever possible. For instance, in our examples, we apply random rotation to positions, but only after computing operators. Note that we find common augmentations such as slightly skewing/scaling/subsampling inputs are generally unnecessary with DiffusionNet.
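
As a rough sketch of the template-mesh pattern from the first tip (template_verts, template_faces, and deformed_poses are illustrative placeholders, and model is the DiffusionNet from the recipe above):

import diffusion_net

# Precompute operators ONCE, from the reference pose of the template;
# they are shared across every deformed pose of that template.
frames, mass, L, evals, evecs, gradX, gradY = \
    diffusion_net.geometry.get_operators(template_verts, template_faces,
                                         op_cache_dir='my/cache/directory/')

for posed_verts in deformed_poses:

    # Feed the deformed xyz coordinates as input features.
    # (Per the second tip, augmentations such as random rotations would
    # also be applied here, after the operators were computed.)
    features = diffusion_net.geometry.normalize_positions(posed_verts)

    # Evaluate with the shared template operators (a slight approximation)
    outputs = model(features, mass, L=L, evals=evals, evecs=evecs,
                    gradX=gradX, gradY=gradY, faces=template_faces)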

Thanks

Parts of this work were generously supported by the Fields Institute for Mathematics, the Vector Institute, ERC Starting Grant No. 758800 (EXPROTEA), the ANR AI Chair AIGRETTE, a Packard Fellowship, NSF CAREER Award 1943123, an NSF Graduate Research Fellowship, and gifts from Activision Blizzard, Adobe, Disney, Facebook, and nTopology. The dataset loaders mimic code from HSN, pytorch-geometric, and probably indirectly from other sources too. Thank you!
