
AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation

[arXiv] [Project Page] [BibTex]

Code release for the CVPR 2022 paper "AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation".

(Teaser video: gen3d_teaser_crop.mp4)

Installation

Please install PyTorch and PyTorch3D. Alternatively, you can set up the environment using conda:

conda env create -f autosdf.yaml
conda activate autosdf

Note that the exact environment varies from machine to machine. We tested the code on Ubuntu 20.04 with cuda=11.3, python=3.8.11, pytorch=1.9.0, and pytorch3d=0.5.0.
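To verify the setup, you can print the installed versions and confirm CUDA is visible. This is only a quick sanity-check sketch; your versions need not match the tested ones exactly:

import torch
import pytorch3d

# Versions the code was tested against are noted in the comments above.
print("torch:", torch.__version__)           # tested with 1.9.0
print("pytorch3d:", pytorch3d.__version__)   # tested with 0.5.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)   # tested with 11.3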

Demo

We provide Jupyter notebooks for the demos. First, download the pretrained weights from this link and put them under saved_ckpt (a quick sanity-check sketch follows the notebook list below). Then start the notebook server with

jupyter notebook

And run:

  • demo_shape_comp.ipynb for shape completion
  • demo_single_view_recon.ipynb for single-view reconstruction
  • demo-lang-conditional.ipynb for language-guided generation
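Before opening the notebooks, you can check that the pretrained weights are in place. This is a minimal sketch; it assumes the downloaded weights are .pth files placed directly under saved_ckpt (the exact filenames depend on the download):

from pathlib import Path

ckpt_dir = Path("saved_ckpt")
ckpts = sorted(ckpt_dir.glob("*.pth"))  # assumption: weights are stored as .pth files
if not ckpts:
    raise FileNotFoundError(f"No checkpoints found under {ckpt_dir.resolve()}")
for p in ckpts:
    print(p.name, f"{p.stat().st_size / 1e6:.1f} MB")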

Preparing the Data

  1. ShapeNet

First, download ShapeNetCore.v1 following the instructions at https://www.shapenet.org/account/. Put the downloaded archive under data/ShapeNet and unzip it; we assume the path to the unzipped folder is data/ShapeNet/ShapeNetCore.v1. To extract SDF values, we followed the preprocessing steps from DISN.
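A small sanity check of the expected layout (a sketch; it only assumes the data/ShapeNet/ShapeNetCore.v1 path mentioned above, with one numeric synset folder per category):

from pathlib import Path

root = Path("data/ShapeNet/ShapeNetCore.v1")
assert root.is_dir(), f"Expected the unzipped ShapeNet at {root}"

# ShapeNet synsets are numeric WordNet IDs, one folder per category.
synsets = [d for d in root.iterdir() if d.is_dir() and d.name.isdigit()]
print(f"{len(synsets)} synsets found")
for d in synsets[:3]:
    n_models = sum(1 for m in d.iterdir() if m.is_dir())
    print(d.name, n_models, "models")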

  2. Pix3D

The Pix3D dataset can be downloaded here: https://github.com/xingyuansun/pix3d.
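After unpacking the Pix3D archive (which ships an annotation file, pix3d.json), a quick look at the annotations can confirm the download. A minimal sketch; the data/pix3d location is an assumption, so adjust it to wherever you unpacked the dataset:

import json
from collections import Counter
from pathlib import Path

pix3d_root = Path("data/pix3d")  # assumed location
with open(pix3d_root / "pix3d.json") as f:
    annos = json.load(f)

print(len(annos), "image/model pairs")
print(Counter(a["category"] for a in annos))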

Training

  1. First, train the P-VQ-VAE on ShapeNet:
./launchers/train_pvqvae_snet.sh
  2. Then extract the code for each ShapeNet sample (caching them for training the transformer):
./launchers/extract_pvqvae_snet.sh
  3. Train the random-order transformer to learn the shape prior (steps 2 and 3 are illustrated in the sketch after this list):
./launchers/train_rand_tf_snet_code.sh
  4. To train the image marginal on Pix3D, first extract the code for each Pix3D training sample:
./launchers/extract_pvqvae_pix3d.sh
  5. Train the image marginal on Pix3D:
./launchers/train_resnet2vq_pix3d_img.sh
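The scripts above implement a two-stage pipeline: the P-VQ-VAE turns each SDF volume into a grid of discrete codebook indices, and the transformer is then trained autoregressively over those cached indices in randomly sampled orderings. The toy sketch below is not the repository's code; it only illustrates the quantization and code-extraction idea with hypothetical sizes (an 8^3 latent grid, a 512-entry codebook):

import torch

# Hypothetical sizes, for illustration only.
codebook = torch.randn(512, 256)            # 512 codebook entries, 256-dim embeddings
latents = torch.randn(4, 256, 8, 8, 8)      # encoder output for a batch of 4 SDF volumes

# Vector quantization: replace each latent vector with the index of its
# nearest codebook entry (these indices are the "codes" cached to disk).
flat = latents.permute(0, 2, 3, 4, 1).reshape(-1, 256)   # (4*8*8*8, 256)
dists = torch.cdist(flat, codebook)                       # pairwise L2 distances
codes = dists.argmin(dim=1).reshape(4, 8, 8, 8)           # discrete index grid per shape

# For the random-order transformer, each code grid is flattened and
# visited in a freshly sampled permutation of the positions.
perm = torch.randperm(8 * 8 * 8)
sequence = codes.reshape(4, -1)[:, perm]                  # training sequence of token ids
print(codes.shape, sequence.shape)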

Issues and FAQ

1. Regarding mcubes functions

We originally used the marching cubes implementation from this repo: https://github.com/JustusThies/PyMarchingCubes. However, some of its dependencies seem to be outdated, which can make installation troublesome. A quick workaround is to install mcubes from https://github.com/pmneila/PyMCubes:

pip install PyMCubes

and replace every occurrence of import marching_cubes as mcubes in our code with import mcubes.
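With PyMCubes installed, the call sites look like the following. This is a minimal sketch using a synthetic SDF; the grid resolution, isolevel, and output filename are illustrative:

import numpy as np
import mcubes

# Synthetic signed-distance grid of a sphere, for illustration.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero isosurface and write it out as an .obj mesh.
vertices, triangles = mcubes.marching_cubes(sdf, 0.0)
mcubes.export_obj(vertices, triangles, "sphere.obj")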

Citing AutoSDF

If you find this code helpful, please consider citing:

@inproceedings{autosdf2022,
  title={{AutoSDF}: Shape Priors for 3D Completion, Reconstruction and Generation},
  author={Mittal, Paritosh and Cheng, Yen-Chi and Singh, Maneesh and Tulsiani, Shubham},
  booktitle={CVPR},
  year={2022}
}

Acknowledgement

This code borrows heavily from Cycle-GAN and VQ-GAN. Thanks to the authors for making their code available!