Tiny CUDA Neural Networks

Lightning fast C++/CUDA neural network framework

This is a small, self-contained framework for training and querying neural networks. Most notably, it contains a lightning fast "fully fused" multi-layer perceptron (technical paper), a versatile multiresolution hash encoding (technical paper), as well as support for various other input encodings, losses, and optimizers.

Performance

[Benchmark figure: fully fused networks vs. TensorFlow v2.5.0 w/ XLA, measured on multi-layer perceptrons 64 (solid line) and 128 (dashed line) neurons wide on an RTX 3090. Generated by benchmarks/bench_ours.cu and benchmarks/bench_tensorflow.py using data/config_oneblob.json.]

Usage

Tiny CUDA neural networks have a simple C++/CUDA API:

#include <tiny-cuda-nn/common.h>

// Configure the model
nlohmann::json config = {
	{"loss", {
		{"otype", "L2"}
	}},
	{"optimizer", {
		{"otype", "Adam"},
		{"learning_rate", 1e-3},
	}},
	{"encoding", {
		{"otype", "HashGrid"},
		{"n_levels", 16},
		{"n_features_per_level", 2},
		{"log2_hashmap_size", 19},
		{"base_resolution", 16},
		{"per_level_scale", 2.0},
	}},
	{"network", {
		{"otype", "FullyFusedMLP"},
		{"activation", "ReLU"},
		{"output_activation", "None"},
		{"n_neurons", 64},
		{"n_hidden_layers", 2},
	}},
};

using namespace tcnn;

auto model = create_from_config(n_input_dims, n_output_dims, config);

// Train the model (batch_size must be a multiple of tcnn::BATCH_SIZE_GRANULARITY)
GPUMatrix<float> training_batch_inputs(n_input_dims, batch_size);
GPUMatrix<float> training_batch_targets(n_output_dims, batch_size);

for (int i = 0; i < n_training_steps; ++i) {
	generate_training_batch(&training_batch_inputs, &training_batch_targets); // <-- your code

	float loss;
	model.trainer->training_step(training_batch_inputs, training_batch_targets, &loss);
	std::cout << "iteration=" << i << " loss=" << loss << std::endl;
}

// Use the model
GPUMatrix<float> inference_inputs(n_input_dims, batch_size);
generate_inputs(&inference_inputs); // <-- your code

GPUMatrix<float> inference_outputs(n_output_dims, batch_size);
model.network->inference(inference_inputs, inference_outputs);
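
As noted in the comment above, training_step requires batch_size to be a multiple of tcnn::BATCH_SIZE_GRANULARITY. A minimal sketch for rounding an arbitrary batch size up to the next permitted multiple, using plain integer arithmetic and no additional helpers:

// Round a desired batch size up to the next multiple of tcnn::BATCH_SIZE_GRANULARITY
// before allocating the GPUMatrix objects above. The value 100000 is only an example.
uint32_t desired_batch_size = 100000;
uint32_t granularity = tcnn::BATCH_SIZE_GRANULARITY;
uint32_t batch_size = ((desired_batch_size + granularity - 1) / granularity) * granularity;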

Example: learning a 2D image

We provide a sample application where an image function (x,y) -> (R,G,B) is learned. It can be run via

tiny-cuda-nn$ ./build/mlp_learning_an_image data/images/albert.jpg data/config_hash.json

producing an image every couple of training steps. Each 1000 steps should take a bit over 1 second with the default configuration on an RTX 4090.

[Images: results after 10, 100, and 1000 training steps, alongside the reference image]

Requirements

  • An NVIDIA GPU; tensor cores increase performance when available. All shown results come from an RTX 3090.
  • A C++14 capable compiler. The following choices are recommended and have been tested:
    • Windows: Visual Studio 2019 or 2022
    • Linux: GCC/G++ 8 or higher
  • A recent version of CUDA. The following choices are recommended and have been tested:
    • Windows: CUDA 11.5 or higher
    • Linux: CUDA 10.2 or higher
  • CMake v3.21 or higher.
  • The fully fused MLP component of this framework requires a very large amount of shared memory in its default configuration. It will likely only work on an RTX 3090, an RTX 2080 Ti, or higher-end GPUs. Lower-end cards must reduce the n_neurons parameter or use the CutlassMLP (better compatibility, but slower) instead; see the configuration sketch below this list.
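
If the fully fused MLP does not fit on your GPU, one possible fallback is sketched below: swap the "network" portion of the configuration from the Usage section to the CutlassMLP listed under Components. The specific values are illustrative, not prescriptive.

// Sketch: fallback network configuration for GPUs that cannot run the fully
// fused MLP at the default width. CutlassMLP trades speed for compatibility.
nlohmann::json network_config = {
	{"otype", "CutlassMLP"},
	{"activation", "ReLU"},
	{"output_activation", "None"},
	{"n_neurons", 64},        // alternatively, keep FullyFusedMLP and reduce this value
	{"n_hidden_layers", 2},
};
// Then assign it in the full configuration: config["network"] = network_config;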

If you are using Linux, install the following packages

sudo apt-get install build-essential git

We also recommend installing CUDA in /usr/local/ and adding the CUDA installation to your PATH. For example, if you have CUDA 11.4, add the following to your ~/.bashrc

export PATH="/usr/local/cuda-11.4/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH"

Compilation (Windows & Linux)

Begin by cloning this repository and all its submodules using the following command:

$ git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
$ cd tiny-cuda-nn

Then, use CMake to build the project: (on Windows, this must be in a developer command prompt)

tiny-cuda-nn$ cmake . -B build
tiny-cuda-nn$ cmake --build build --config RelWithDebInfo -j

If compilation fails inexplicably or takes longer than an hour, you might be running out of memory. Try running the above command without -j in that case.

PyTorch extension

tiny-cuda-nn comes with a PyTorch extension that allows using the fast MLPs and input encodings from within a Python context. These bindings can be significantly faster than full Python implementations; in particular for the multiresolution hash encoding.

The overheads of Python/PyTorch can nonetheless be extensive if the batch size is small. For example, with a batch size of 64k, the bundled mlp_learning_an_image example is ~2x slower through PyTorch than native CUDA. With a batch size of 256k and higher (default), the performance is much closer.

Begin by setting up a Python 3.X environment with a recent, CUDA-enabled version of PyTorch. Then, invoke

pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

Alternatively, if you would like to install from a local clone of tiny-cuda-nn, invoke

tiny-cuda-nn$ cd bindings/torch
tiny-cuda-nn/bindings/torch$ python setup.py install

Upon success, you can use tiny-cuda-nn models as in the following example:

import commentjson as json
import tinycudann as tcnn
import torch

with open("data/config_hash.json") as f:
	config = json.load(f)

# Option 1: efficient Encoding+Network combo.
model = tcnn.NetworkWithInputEncoding(
	n_input_dims, n_output_dims,
	config["encoding"], config["network"]
)

# Option 2: separate modules. Slower but more flexible.
encoding = tcnn.Encoding(n_input_dims, config["encoding"])
network = tcnn.Network(encoding.n_output_dims, n_output_dims, config["network"])
model = torch.nn.Sequential(encoding, network)

See samples/mlp_learning_an_image_pytorch.py for an example.

Components

Following is a summary of the components of this framework. The JSON documentation lists configuration options.

Networks
  • Fully fused MLP (src/fully_fused_mlp.cu): Lightning fast implementation of small multi-layer perceptrons (MLPs).
  • CUTLASS MLP (src/cutlass_mlp.cu): MLP based on CUTLASS' GEMM routines. Slower than the fully fused MLP, but handles larger networks and is still reasonably fast.

Input encodings
  • Composite (include/tiny-cuda-nn/encodings/composite.h): Allows composing multiple encodings. Can, for example, be used to assemble the Neural Radiance Caching encoding [Müller et al. 2021].
  • Frequency (include/tiny-cuda-nn/encodings/frequency.h): NeRF's [Mildenhall et al. 2020] positional encoding applied equally to all dimensions.
  • Grid (include/tiny-cuda-nn/encodings/grid.h): Encoding based on trainable multiresolution grids. Used for Instant Neural Graphics Primitives [Müller et al. 2022]. The grids can be backed by hashtables, dense storage, or tiled storage.
  • Identity (include/tiny-cuda-nn/encodings/identity.h): Leaves values untouched.
  • Oneblob (include/tiny-cuda-nn/encodings/oneblob.h): From Neural Importance Sampling [Müller et al. 2019] and Neural Control Variates [Müller et al. 2020].
  • SphericalHarmonics (include/tiny-cuda-nn/encodings/spherical_harmonics.h): A frequency-space encoding that is more suitable to direction vectors than component-wise encodings.
  • TriangleWave (include/tiny-cuda-nn/encodings/triangle_wave.h): Low-cost alternative to NeRF's encoding. Used in Neural Radiance Caching [Müller et al. 2021].

Losses
  • L1 (include/tiny-cuda-nn/losses/l1.h): Standard L1 loss.
  • Relative L1 (include/tiny-cuda-nn/losses/l1.h): Relative L1 loss normalized by the network prediction.
  • MAPE (include/tiny-cuda-nn/losses/mape.h): Mean absolute percentage error (MAPE). The same as Relative L1, but normalized by the target.
  • SMAPE (include/tiny-cuda-nn/losses/smape.h): Symmetric mean absolute percentage error (SMAPE). The same as Relative L1, but normalized by the mean of the prediction and the target.
  • L2 (include/tiny-cuda-nn/losses/l2.h): Standard L2 loss.
  • Relative L2 (include/tiny-cuda-nn/losses/relative_l2.h): Relative L2 loss normalized by the network prediction [Lehtinen et al. 2018].
  • Relative L2 Luminance (include/tiny-cuda-nn/losses/relative_l2_luminance.h): Same as above, but normalized by the luminance of the network prediction. Only applicable when the network prediction is RGB. Used in Neural Radiance Caching [Müller et al. 2021].
  • Cross Entropy (include/tiny-cuda-nn/losses/cross_entropy.h): Standard cross entropy loss. Only applicable when the network prediction is a PDF.
  • Variance (include/tiny-cuda-nn/losses/variance_is.h): Standard variance loss. Only applicable when the network prediction is a PDF.

Optimizers
  • Adam (include/tiny-cuda-nn/optimizers/adam.h): Implementation of Adam [Kingma and Ba 2014], generalized to AdaBound [Luo et al. 2019].
  • Novograd (include/tiny-cuda-nn/optimizers/lookahead.h): Implementation of Novograd [Ginsburg et al. 2019].
  • SGD (include/tiny-cuda-nn/optimizers/sgd.h): Standard stochastic gradient descent (SGD).
  • Shampoo (include/tiny-cuda-nn/optimizers/shampoo.h): Implementation of the 2nd-order Shampoo optimizer [Gupta et al. 2018] with home-grown optimizations as well as those by Anil et al. [2020].
  • Average (include/tiny-cuda-nn/optimizers/average.h): Wraps another optimizer and computes a linear average of the weights over the last N iterations. The average is used for inference only (it does not feed back into training).
  • Batched (include/tiny-cuda-nn/optimizers/batched.h): Wraps another optimizer, invoking the nested optimizer once every N steps on the averaged gradient. Has the same effect as increasing the batch size but requires only a constant amount of memory.
  • Composite (include/tiny-cuda-nn/optimizers/composite.h): Allows using several optimizers on different parameters.
  • EMA (include/tiny-cuda-nn/optimizers/average.h): Wraps another optimizer and computes an exponential moving average of the weights. The average is used for inference only (it does not feed back into training).
  • Exponential Decay (include/tiny-cuda-nn/optimizers/exponential_decay.h): Wraps another optimizer and performs piecewise-constant exponential learning-rate decay (see the configuration sketch after this list).
  • Lookahead (include/tiny-cuda-nn/optimizers/lookahead.h): Wraps another optimizer, implementing the lookahead algorithm [Zhang et al. 2019].
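
Several of the optimizers above (Average, Batched, EMA, Exponential Decay, Lookahead) wrap another optimizer. The sketch below shows how such nesting might be configured for exponential learning-rate decay around Adam; the "nested" key and the decay parameter names are assumptions based on the framework's JSON documentation, and the numbers are illustrative only.

// Sketch: piecewise-constant exponential learning-rate decay wrapped around Adam.
nlohmann::json optimizer_config = {
	{"otype", "ExponentialDecay"},
	{"decay_start", 10000},    // step at which the decay begins (illustrative)
	{"decay_interval", 5000},  // decay once every this many steps afterwards (illustrative)
	{"decay_base", 0.33},      // factor applied to the learning rate at each decay (illustrative)
	{"nested", {
		{"otype", "Adam"},
		{"learning_rate", 1e-2},
	}},
};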

License and Citation

This framework is licensed under the BSD 3-clause license. Please see LICENSE.txt for details.

If you use it in your research, we would appreciate a citation via

@software{tiny-cuda-nn,
	author = {M\"uller, Thomas},
	license = {BSD-3-Clause},
	month = {4},
	title = {{tiny-cuda-nn}},
	url = {https://github.com/NVlabs/tiny-cuda-nn},
	version = {1.7},
	year = {2021}
}

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Publications & Software

Among others, this framework powers the following publications:

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
ACM Transactions on Graphics (SIGGRAPH), July 2022
Website / Paper / Code / Video / BibTeX

Extracting Triangular 3D Models, Materials, and Lighting From Images
Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
CVPR (Oral), June 2022
Website / Paper / Video / BibTeX

Real-time Neural Radiance Caching for Path Tracing
Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
ACM Transactions on Graphics (SIGGRAPH), August 2021
Paper / GTC talk / Video / Interactive results viewer / BibTeX

As well as the following software:

NerfAcc: A General NeRF Acceleration Toolbox
Ruilong Li, Matthew Tancik, Angjoo Kanazawa
https://github.com/KAIR-BAIR/nerfacc

Nerfstudio: A Framework for Neural Radiance Field Development
Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
https://github.com/nerfstudio-project/nerfstudio

Please feel free to make a pull request if your publication or software is not listed.

Acknowledgments

Special thanks go to the NRC authors for helpful discussions and to Nikolaus Binder for providing part of the infrastructure of this framework, as well as for help with utilizing TensorCores from within CUDA.
