
The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder" (NeurIPS 2020 Spotlight Paper)



NVAE is a deep hierarchical variational autoencoder that enables training SOTA likelihood-based generative models on several image datasets.

Requirements

NVAE is built in Python 3.7 using PyTorch 1.6.0. Use the following command to install the requirements:

pip install -r requirements.txt

Set up file paths and data

We have examined NVAE on several datasets. For large datasets, we store the data in LMDB format for I/O efficiency. The subsections below describe how to prepare each dataset. Throughout, $DATA_DIR indicates the path to a data directory that will contain all the datasets and $CODE_DIR refers to the code directory:
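For background, an LMDB file is a single memory-mapped key-value store, so reading many small images avoids per-file filesystem overhead. The snippet below is only a generic illustration of reading one record with the Python lmdb package; the path and the integer-string key layout are assumptions for illustration, not necessarily this repository's on-disk format.

import lmdb

# Open an existing LMDB environment read-only (path is a placeholder).
env = lmdb.open('/path/to/celeba64_lmdb', readonly=True, lock=False)
with env.begin(write=False) as txn:
    # Fetch the raw encoded bytes stored under the (assumed) key "0".
    raw = txn.get('0'.encode())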

MNIST and CIFAR-10

These datasets are downloaded automatically the first time you run the main NVAE training script train.py. Use --data=$DATA_DIR/mnist or --data=$DATA_DIR/cifar10 so that the datasets are downloaded to the corresponding directories.

CelebA 64

Run the following commands to download the CelebA images and store them in an LMDB dataset:
cd $CODE_DIR/scripts
python create_celeba64_lmdb.py --split train --img_path $DATA_DIR/celeba_org --lmdb_path $DATA_DIR/celeba64_lmdb
python create_celeba64_lmdb.py --split valid --img_path $DATA_DIR/celeba_org --lmdb_path $DATA_DIR/celeba64_lmdb
python create_celeba64_lmdb.py --split test  --img_path $DATA_DIR/celeba_org --lmdb_path $DATA_DIR/celeba64_lmdb

Above, the images are downloaded to $DATA_DIR/celeba_org automatically, and the LMDB datasets are then created at $DATA_DIR/celeba64_lmdb.

ImageNet 32x32

Run the following commands to download the tfrecord files from GLOW and convert them to LMDB datasets:

mkdir -p $DATA_DIR/imagenet-oord
cd $DATA_DIR/imagenet-oord
wget https://storage.googleapis.com/glow-demo/data/imagenet-oord-tfr.tar
tar -xvf imagenet-oord-tfr.tar
cd $CODE_DIR/scripts
python convert_tfrecord_to_lmdb.py --dataset=imagenet-oord_32 --tfr_path=$DATA_DIR/imagenet-oord/mnt/host/imagenet-oord-tfr --lmdb_path=$DATA_DIR/imagenet-oord/imagenet-oord-lmdb_32 --split=train
python convert_tfrecord_to_lmdb.py --dataset=imagenet-oord_32 --tfr_path=$DATA_DIR/imagenet-oord/mnt/host/imagenet-oord-tfr --lmdb_path=$DATA_DIR/imagenet-oord/imagenet-oord-lmdb_32 --split=validation
CelebA HQ 256

Run the following commands to download the tfrecord files from GLOW and convert them to LMDB datasets:

mkdir -p $DATA_DIR/celeba
cd $DATA_DIR/celeba
wget https://storage.googleapis.com/glow-demo/data/celeba-tfr.tar
tar -xvf celeba-tfr.tar
cd $CODE_DIR/scripts
python convert_tfrecord_to_lmdb.py --dataset=celeba --tfr_path=$DATA_DIR/celeba/celeba-tfr --lmdb_path=$DATA_DIR/celeba/celeba-lmdb --split=train
python convert_tfrecord_to_lmdb.py --dataset=celeba --tfr_path=$DATA_DIR/celeba/celeba-tfr --lmdb_path=$DATA_DIR/celeba/celeba-lmdb --split=validation
FFHQ 256

Visit this Google Drive location and download images1024x1024.zip. Run the following commands to unzip the images and store them in LMDB datasets:

mkdir -p $DATA_DIR/ffhq
unzip images1024x1024.zip -d $DATA_DIR/ffhq/
cd $CODE_DIR/scripts
python create_ffhq_lmdb.py --ffhq_img_path=$DATA_DIR/ffhq/images1024x1024/ --ffhq_lmdb_path=$DATA_DIR/ffhq/ffhq-lmdb --split=train
python create_ffhq_lmdb.py --ffhq_img_path=$DATA_DIR/ffhq/images1024x1024/ --ffhq_lmdb_path=$DATA_DIR/ffhq/ffhq-lmdb --split=validation
LSUN

We use LSUN datasets in our follow-up works. Visit LSUN for instructions on how to download this dataset. Since the LSUN scene datasets come in the LMDB format, they are ready to be loaded using torchvision data loaders.
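As a rough sketch of that last point (not part of this repository; the root path and scene category are placeholders), an LSUN LMDB can be opened directly with torchvision:

import torch
from torchvision import datasets, transforms

# Point torchvision at the directory containing the downloaded LSUN LMDB folders.
lsun = datasets.LSUN(root='/path/to/lsun', classes=['church_outdoor_train'],
                     transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(lsun, batch_size=16, shuffle=True)
images, _ = next(iter(loader))  # one batch of training images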

Running the main NVAE training and evaluation scripts

We use the following commands to train NVAE on each dataset for Table 1 in the paper. Normalizing flows are enabled on all datasets except MNIST. Check Table 6 in the paper for more information on training details. Note that for multinode training (experiments with more than 8 GPUs), we use the mpirun command to run the training scripts on multiple nodes. Please adjust the commands below according to your setup. Below, IP_ADDR is the IP address of the machine that will host the process with rank 0, and NODE_RANK is the index of each node among all the nodes running the job.

MNIST

Two 16-GB V100 GPUs are used for training NVAE on dynamically binarized MNIST. Training takes about 21 hours.

export EXPR_ID=UNIQUE_EXPR_ID
export DATA_DIR=PATH_TO_DATA_DIR
export CHECKPOINT_DIR=PATH_TO_CHECKPOINT_DIR
export CODE_DIR=PATH_TO_CODE_DIR
cd $CODE_DIR
python train.py --data $DATA_DIR/mnist --root $CHECKPOINT_DIR --save $EXPR_ID --dataset mnist --batch_size 200 \
        --epochs 400 --num_latent_scales 2 --num_groups_per_scale 10 --num_postprocess_cells 3 --num_preprocess_cells 3 \
        --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 --num_latent_per_group 20 --num_preprocess_blocks 2 \
        --num_postprocess_blocks 2 --weight_decay_norm 1e-2 --num_channels_enc 32 --num_channels_dec 32 --num_nf 0 \
        --ada_groups --num_process_per_node 2 --use_se --res_dist --fast_adamax 
CIFAR-10

Eight 16-GB V100 GPUs are used for training NVAE on CIFAR-10. Training takes about 55 hours.

export EXPR_ID=UNIQUE_EXPR_ID
export DATA_DIR=PATH_TO_DATA_DIR
export CHECKPOINT_DIR=PATH_TO_CHECKPOINT_DIR
export CODE_DIR=PATH_TO_CODE_DIR
cd $CODE_DIR
python train.py --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID --dataset cifar10 \
        --num_channels_enc 128 --num_channels_dec 128 --epochs 400 --num_postprocess_cells 2 --num_preprocess_cells 2 \
        --num_latent_scales 1 --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 \
        --num_preprocess_blocks 1 --num_postprocess_blocks 1 --num_groups_per_scale 30 --batch_size 32 \
        --weight_decay_norm 1e-2 --num_nf 1 --num_process_per_node 8 --use_se --res_dist --fast_adamax 
CelebA 64

Eight 16-GB V100 GPUs are used for training NVAE on CelebA 64. Training takes about 92 hours.

export EXPR_ID=UNIQUE_EXPR_ID
export DATA_DIR=PATH_TO_DATA_DIR
export CHECKPOINT_DIR=PATH_TO_CHECKPOINT_DIR
export CODE_DIR=PATH_TO_CODE_DIR
cd $CODE_DIR
python train.py --data $DATA_DIR/celeba64_lmdb --root $CHECKPOINT_DIR --save $EXPR_ID --dataset celeba_64 \
        --num_channels_enc 64 --num_channels_dec 64 --epochs 90 --num_postprocess_cells 2 --num_preprocess_cells 2 \
        --num_latent_scales 3 --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 \
        --num_preprocess_blocks 1 --num_postprocess_blocks 1 --weight_decay_norm 1e-1 --num_groups_per_scale 20 \
        --batch_size 16 --num_nf 1 --ada_groups --num_process_per_node 8 --use_se --res_dist --fast_adamax
ImageNet 32x32

24 16-GB V100 GPUs are used for training NVAE on ImageNet 32x32. Training takes about 70 hours.

export EXPR_ID=UNIQUE_EXPR_ID
export DATA_DIR=PATH_TO_DATA_DIR
export CHECKPOINT_DIR=PATH_TO_CHECKPOINT_DIR
export CODE_DIR=PATH_TO_CODE_DIR
export IP_ADDR=IP_ADDRESS
export NODE_RANK=NODE_RANK_BETWEEN_0_TO_2
cd $CODE_DIR
mpirun --allow-run-as-root -np 3 -npernode 1 bash -c \
        'python train.py --data $DATA_DIR/imagenet-oord/imagenet-oord-lmdb_32 --root $CHECKPOINT_DIR --save $EXPR_ID --dataset imagenet_32 \
        --num_channels_enc 192 --num_channels_dec 192 --epochs 45 --num_postprocess_cells 2 --num_preprocess_cells 2 \
        --num_latent_scales 1 --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 \
        --num_preprocess_blocks 1 --num_postprocess_blocks 1 --num_groups_per_scale 28 \
        --batch_size 24 --num_nf 1 --warmup_epochs 1 \
        --weight_decay_norm 1e-2 --weight_decay_norm_anneal --weight_decay_norm_init 1e0 \
        --num_process_per_node 8 --use_se --res_dist \
        --fast_adamax --node_rank $NODE_RANK --num_proc_node 3 --master_address $IP_ADDR '
CelebA HQ 256

24 32-GB V100 GPUs are used for training NVAE on CelebA HQ 256. Training takes about 94 hours.

export EXPR_ID=UNIQUE_EXPR_ID
export DATA_DIR=PATH_TO_DATA_DIR
export CHECKPOINT_DIR=PATH_TO_CHECKPOINT_DIR
export CODE_DIR=PATH_TO_CODE_DIR
export IP_ADDR=IP_ADDRESS
export NODE_RANK=NODE_RANK_BETWEEN_0_TO_2
cd $CODE_DIR
mpirun --allow-run-as-root -np 3 -npernode 1 bash -c \
        'python train.py --data $DATA_DIR/celeba/celeba-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID --dataset celeba_256 \
        --num_channels_enc 30 --num_channels_dec 30 --epochs 300 --num_postprocess_cells 2 --num_preprocess_cells 2 \
        --num_latent_scales 5 --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 \
        --num_preprocess_blocks 1 --num_postprocess_blocks 1 --weight_decay_norm 1e-2 --num_groups_per_scale 16 \
        --batch_size 4 --num_nf 2 --ada_groups --min_groups_per_scale 4 \
        --weight_decay_norm_anneal --weight_decay_norm_init 1. --num_process_per_node 8 --use_se --res_dist \
        --fast_adamax --num_x_bits 5 --node_rank $NODE_RANK --num_proc_node 3 --master_address $IP_ADDR '

In our early experiments, a smaller model with 24 channels instead of 30 could be trained on only 8 GPUs in about the same amount of time (with a batch size of 6). The smaller model obtains only a 0.01 bpd higher negative log-likelihood.

FFHQ 256

24 32-GB V100 GPUs are used for training NVAE on FFHQ 256. Training takes about 160 hours.

export EXPR_ID=UNIQUE_EXPR_ID
export DATA_DIR=PATH_TO_DATA_DIR
export CHECKPOINT_DIR=PATH_TO_CHECKPOINT_DIR
export CODE_DIR=PATH_TO_CODE_DIR
export IP_ADDR=IP_ADDRESS
export NODE_RANK=NODE_RANK_BETWEEN_0_TO_2
cd $CODE_DIR
mpirun --allow-run-as-root -np 3 -npernode 1 bash -c \
        'python train.py --data $DATA_DIR/ffhq/ffhq-lmdb --root $CHECKPOINT_DIR --save $EXPR_ID --dataset ffhq \
        --num_channels_enc 30 --num_channels_dec 30 --epochs 200 --num_postprocess_cells 2 --num_preprocess_cells 2 \
        --num_latent_scales 5 --num_latent_per_group 20 --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 \
        --num_preprocess_blocks 1 --num_postprocess_blocks 1 --weight_decay_norm 1e-1  --num_groups_per_scale 16 \
        --batch_size 4 --num_nf 2  --ada_groups --min_groups_per_scale 4 \
        --weight_decay_norm_anneal --weight_decay_norm_init 1. --num_process_per_node 8 --use_se --res_dist \
        --fast_adamax --num_x_bits 5 --learning_rate 8e-3 --node_rank $NODE_RANK --num_proc_node 3 --master_address $IP_ADDR '

In our early experiments, a smaller model with 24 channels instead of 30 could be trained on only 8 GPUs in about the same amount of time (with a batch size of 6). The smaller model obtains only a 0.01 bpd higher negative log-likelihood.

If for any reason your training is stopped, use the exact same command with the addition of --cont_training to continue training from the last saved checkpoint. If you observe NaN, continuing training with this flag usually will not fix the NaN issue.
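For example, to resume the MNIST run from above after an interruption, re-issue the identical command with the flag appended:

cd $CODE_DIR
python train.py --data $DATA_DIR/mnist --root $CHECKPOINT_DIR --save $EXPR_ID --dataset mnist --batch_size 200 \
        --epochs 400 --num_latent_scales 2 --num_groups_per_scale 10 --num_postprocess_cells 3 --num_preprocess_cells 3 \
        --num_cell_per_cond_enc 2 --num_cell_per_cond_dec 2 --num_latent_per_group 20 --num_preprocess_blocks 2 \
        --num_postprocess_blocks 2 --weight_decay_norm 1e-2 --num_channels_enc 32 --num_channels_dec 32 --num_nf 0 \
        --ada_groups --num_process_per_node 2 --use_se --res_dist --fast_adamax --cont_training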

Known Issues

Cannot build CelebA 64 or training gives NaN right at the beginning on this dataset

Several users have reported issues building CelebA 64 or have encountered NaN at the beginning of training on this dataset. If you face similar issues, you can download the dataset manually and build the LMDBs using the instructions in issue #2.

Getting NaN after a few epochs of training

One of the main challenges in training very deep hierarchical VAEs is the training instability that we discussed in the paper. We have verified that the settings in the commands above can be trained in a stable way. If you modify these settings and encounter NaN after a few epochs of training, you can use the following tricks to stabilize your training (a hypothetical example is sketched below): i) increase the spectral regularization coefficient --weight_decay_norm; ii) apply exponential decay to --weight_decay_norm using --weight_decay_norm_anneal and --weight_decay_norm_init; iii) decrease the learning rate.
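As an illustration only (the particular values below are not tuned recommendations), a modified run could combine all three tricks by adjusting the corresponding flags of your training command:

python train.py <your other flags> \
        --weight_decay_norm 1e-1 \
        --weight_decay_norm_anneal --weight_decay_norm_init 1. \
        --learning_rate 6e-3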

Training freezes with no NaN

In some very rare cases, we observed that training freezes after 2-3 days. We believe the root cause is a race condition in one of the low-level libraries. If this happens, kill your current run and use the exact same command with the addition of --cont_training to continue training from the last saved checkpoint.

Monitoring the training progress

While running any of the commands above, you can monitor the training progress using TensorBoard:

tensorboard --logdir $CHECKPOINT_DIR/eval-$EXPR_ID/

Above, $CHECKPOINT_DIR and $EXPR_ID are the same variables used for running the main training script.

Post-training sampling, evaluation, and checkpoints

Evaluating Log-Likelihood

You can use the following command to load a trained model and evaluate it on the test datasets:

cd $CODE_DIR
python evaluate.py --checkpoint $CHECKPOINT_DIR/eval-$EXPR_ID/checkpoint.pt --data $DATA_DIR/mnist --eval_mode=evaluate --num_iw_samples=1000

Above, --num_iw_samples indicates the number of importance weighted samples used in evaluation. $CHECKPOINT_DIR and $EXPR_ID are the same variables used for running the main training script. Set --data to the same argument that was used when training NVAE (our example is for MNIST).
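For context, this evaluation reports the standard importance-weighted bound on the log-likelihood; with K = --num_iw_samples posterior samples per test image it estimates (a standard formula, stated here for reference rather than taken from the code):

\log p(x) \;\ge\; \mathbb{E}_{z_1,\ldots,z_K \sim q(z \mid x)}\left[\log \frac{1}{K}\sum_{k=1}^{K}\frac{p(x, z_k)}{q(z_k \mid x)}\right]

The bound tightens as K grows, at the cost of more forward passes per image.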

Sampling

You can also use the following command to generate samples from a trained model:

cd $CODE_DIR
python evaluate.py --checkpoint $CHECKPOINT_DIR/eval-$EXPR_ID/checkpoint.pt --eval_mode=sample --temp=0.6 --readjust_bn

where --temp sets the temperature used for sampling and --readjust_bn enables readjustment of the BN statistics as described in the paper. If you remove --readjust_bn, sampling will proceed with the BN layers in eval mode (i.e., BN layers will use the running means and variances computed during training).
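Temperature here follows the usual convention for hierarchical VAEs (a hedged summary; see the paper for the exact procedure): each conditional of the prior is sampled with its standard deviation scaled by the temperature,

z \;\sim\; \mathcal{N}\!\left(\mu,\; (\tau \sigma)^2\right), \qquad 0 < \tau \le 1,

so a value such as --temp=0.6 concentrates samples near the mode, trading diversity for visual fidelity.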

Computing FID

You can compute the FID score using 50K samples. To do so, you will need to create a mean and covariance statistics file on the training data using a command like:

cd $CODE_DIR
python scripts/precompute_fid_statistics.py --data $DATA_DIR/cifar10 --dataset cifar10 --fid_dir /tmp/fid-stats/

The command above computes the reference statistics on the CIFAR-10 dataset and stores them in the --fid_dir directory. Given the reference statistics file, we can run the following command to compute the FID score:

cd $CODE_DIR
python evaluate.py --checkpoint $CHECKPOINT_DIR/eval-$EXPR_ID/checkpoint.pt --data $DATA_DIR/cifar10 --eval_mode=evaluate_fid  --fid_dir /tmp/fid-stats/ --temp=0.6 --readjust_bn

where --temp sets the temperature used for sampling and --readjust_bn enables readjustment of the BN statistics as described in the paper. If you remove --readjust_bn, sampling will proceed with the BN layers in eval mode (i.e., BN layers will use the running means and variances computed during training). Above, $CHECKPOINT_DIR and $EXPR_ID are the same variables used for running the main training script. Set --data to the same argument that was used when training NVAE (our example is for CIFAR-10).
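For reference, the FID score is the Fréchet distance between Gaussian fits of Inception features of real and generated images:

\mathrm{FID} \;=\; \lVert \mu_r - \mu_g \rVert_2^2 \;+\; \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)

where (\mu_r, \Sigma_r) are the precomputed reference statistics from the training data and (\mu_g, \Sigma_g) are estimated from the 50K generated samples; lower is better.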

Checkpoints

We provide checkpoints for MNIST, CIFAR-10, CelebA 64, CelebA HQ 256, and FFHQ in this Google Drive directory. For CIFAR-10, we provide two checkpoints, as we observed that a multiscale NVAE gives better qualitative results than a single-scale model on this dataset. The multiscale model is only slightly worse in terms of log-likelihood (by 0.01 bpd). We also observe that one of our early models on CelebA HQ 256, with a 0.01 bpd worse likelihood, generates much better images at low temperature on this dataset.

You can use the commands above to evaluate or sample from these checkpoints.

How to construct smaller NVAE models

In the commands above, we are constructing big NVAE models that require several days of training in most cases. If you'd like to construct smaller NVAEs, you can use these tricks:

  • Reduce the network width: --num_channels_enc and --num_channels_dec control the number of initial channels in the bottom-up and top-down networks, respectively. Recall that we halve the number of channels with every spatial downsampling layer in the bottom-up network, and we double the number of channels with every upsampling layer in the top-down network. By reducing --num_channels_enc and --num_channels_dec, you can reduce the overall width of the networks.

  • Reduce the number of residual cells in the hierarchy: --num_cell_per_cond_enc and --num_cell_per_cond_dec control the number of residual cells used between every latent variable group in the bottom-up and top-down networks respectively. In most of our experiments, we are using two cells per group for both networks. You can reduce the number of residual cells to one to make the model smaller.

  • Reduce the number of epochs: You can reduce the training time by reducing --epochs.

  • Reduce the number of groups: You can make NVAE smaller by using a smaller number of latent variable groups. We use two schemes for setting the number of groups:

    1. An equal number of groups: This is set by --num_groups_per_scale which indicates the number of groups in each scale of latent variables. Reduce this number to have a small NVAE.

    2. An adaptive number of groups: This is enabled by --ada_groups. In this case, the highest resolution of latent variables will have --num_groups_per_scale groups, and the smaller scales will successively get half that number of groups (see groups_per_scale in utils.py). We don't let the number of groups go below --min_groups_per_scale. You can reduce the total number of groups by reducing --num_groups_per_scale and --min_groups_per_scale when --ada_groups is enabled. A minimal sketch of this scheme is given below.
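The following Python sketch is a re-implementation for illustration only (the authoritative version is groups_per_scale in utils.py, and the ordering of scales here is an assumption):

def groups_per_scale(num_scales, num_groups_per_scale, is_adaptive, minimum_groups=1):
    """Number of latent-variable groups at each scale, listed from the highest-resolution scale down."""
    groups = []
    g = num_groups_per_scale
    for _ in range(num_scales):
        groups.append(g)
        if is_adaptive:
            # Halve the group count for the next (smaller) scale, but never go below the minimum.
            g = max(minimum_groups, g // 2)
    return groups

# Example: 5 scales, 16 groups at the top scale, adaptive, floor of 4  ->  [16, 8, 4, 4, 4]
print(groups_per_scale(5, 16, True, minimum_groups=4))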

Understanding the implementation

If you are modifying the code, you can use the following figure to map the code to the paper.

Traversing the latent space

We can generate images by traversing the latent space of NVAE. This sequence is generated using our model trained on CelebA HQ by interpolating between samples generated with temperature 0.6. Some artifacts are due to color quantization in GIFs.

License

Please check the LICENSE file. NVAE may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact [email protected].

You should take into consideration that VAEs are trained to mimic the training data distribution, and any bias introduced in data collection will make VAEs generate samples with a similar bias. Additional bias could be introduced during model design, training, or when VAEs are sampled using small temperatures. Bias correction in generative learning is an active area of research, and we recommend that interested readers check this area before building applications using NVAE.

BibTeX:

Please cite our paper if you happen to use this codebase:

@inproceedings{vahdat2020NVAE,
  title={{NVAE}: A Deep Hierarchical Variational Autoencoder},
  author={Vahdat, Arash and Kautz, Jan},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2020}
}
