
Deep Learning Experiment Management

runx - An experiment management tool

runx helps to automate common tasks while doing research:

  • hyperparameter sweeps
  • logging, tensorboard, checkpoint management
  • experiment summarization
  • code checkpointing
  • unique, per-run, directory creation


Quick-start Installation

Install with pip:

> pip install runx

Install from source:

> git clone https://github.com/NVIDIA/runx
> cd runx
> python setup.py install

Introduction example

Suppose you have an existing project that you call as follows:

> python train.py --lr 0.01 --solver sgd

To run a hyperparameter sweep, you'd normally have to code up a one-off script to generate the sweep. But with runx, you simply write a yaml file that lists the hyperparameter values you'd like to sweep.

Start by creating a yaml file called sweep.yml:

CMD: 'python train.py'

HPARAMS:
  lr: [0.01, 0.02]
  solver: ['sgd', 'adam']

Now you can run the sweep with runx:

> python -m runx.runx sweep.yml -i

python train.py --lr 0.01 --solver sgd
python train.py --lr 0.01 --solver adam
python train.py --lr 0.02 --solver sgd
python train.py --lr 0.02 --solver adam 

You can see that runx automatically computes the cross product of all hyperparameters, which in this case results in 4 runs. It then builds commandlines by concatenating the hyperparameters with the training command.
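
For intuition, here's a minimal sketch (not runx's actual implementation) of how that expansion works, using itertools.product over the hyperparameter lists:

# A sketch of hyperparameter expansion; this is an illustration, not runx's code.
import itertools

cmd = 'python train.py'
hparams = {'lr': [0.01, 0.02], 'solver': ['sgd', 'adam']}

# Normalize scalars to one-element lists, then take the cross product.
values = [v if isinstance(v, list) else [v] for v in hparams.values()]
for combo in itertools.product(*values):
    args = ' '.join(f'--{k} {v}' for k, v in zip(hparams.keys(), combo))
    print(f'{cmd} {args}')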

A few useful runx options:

-n     don't run, just print the command
-i     interactive mode (as opposed to submitting jobs to a farm)

runx is especially useful to launch batch jobs to a farm.

Farm support is simple. First create a .runx file that configures the farm:

LOGROOT: /home/logs
FARM: bigfarm

bigfarm:
  SUBMIT_CMD: 'submit_job'
  RESOURCES:
     gpu: 2
     cpu: 16
     mem: 128

  • LOGROOT: where the output of runs should go
  • FARM: you can define multiple farm targets; this selects which one to use
  • SUBMIT_CMD: the script you use to launch jobs to a farm
  • RESOURCES: the arguments to present to SUBMIT_CMD

Now when you run runx, it will generate commands that will attempt to launch jobs to a farm using your SUBMIT_CMD. Notice that we left out the -i cmdline arg because now we want to submit jobs and not run them interactively.

> python -m runx.runx sweep.yml

submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver sgd"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver adam"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver sgd"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver adam"

Unique run directories

We want the results for each training run to go into a unique output/log directory. We don't want things like tensorboard files or logfiles to write over each other. runx solves this problem by automatically generating a unique output directory per run.

You have access to this unique directory name within your experiment yaml via the special variable: LOGDIR. Your training script may use this path and write its output there.

CMD: 'python train.py'

HPARAMS:
  lr: [0.01, 0.02]
  solver: ['sgd', 'adam']
  logdir: LOGDIR

In the above experiment yaml, we have passed LOGDIR as an argument to your training script. When we launch the jobs, runx automatically generates unique output directories and passes the paths to your training script:

> python -m runx.runx sweep.yml

submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver sgd  --logdir /home/logs/athletic-wallaby_2020.02.06_14.19"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver adam  --logdir /home/logs/industrious-chicken_2020.02.06_14.19"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver sgd  --logdir /home/logs/arrogant-buffalo_2020.02.06_14.19"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver adam  --logdir /home/logs/vengeful-jaguar_2020.02.06_14.19"
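
On the training-script side, a minimal sketch of a train.py that consumes the generated path might look like this (the argparse setup is an assumption that simply mirrors the flags above):

# A sketch of a training script that consumes the unique log directory.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float)
parser.add_argument('--solver', type=str)
parser.add_argument('--logdir', type=str)  # filled in by runx with the unique run directory
args = parser.parse_args()

os.makedirs(args.logdir, exist_ok=True)
# write checkpoints, tensorboard files, and logs under args.logdir
with open(os.path.join(args.logdir, 'args.txt'), 'w') as f:
    f.write(str(vars(args)))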

Summarization with sumx

After you've run your experiment, you will likely want to summarize the results. You might want to know:

  • Which training run was best?
  • How long was an epoch?
  • What about other metrics?

You summarize your runs on the commandline with sumx. All you need to do is tell sumx which experiment you want summarized. sumx knows your LOGROOT (it gets that from the .runx file), so it looks within that directory for your experiment directory.

In the following example, we ask sumx to summarize the sweep experiment.

> python -m runx.sumx sweep --sortwith acc

        lr    solver  acc   epoch  epoch_time
------  ----  ------  ----  -----  ----------
run4    0.02  adam    99.1   10     5:11
run3    0.02  sgd     99.0   10     5:05
run1    0.01  sgd     98.2   10     5:15
run2    0.01  adam    98.1   10     5:12

sumx is part of the runx suite, and is able to summarize the different hyperparameters used as well as the metrics/results of your runs. Notice that we used the --sortwith feature of sumx, which sorts your results so you can easily locate your best runs.

This is the basic idea. The following sections will go into more details about all the various features.

runx Architecture

runx consists of three main modules:

  • runx
    • Launch sweeps of training runs using a concise yaml format that allows for multiple values for each hyperparameter
    • In particular, when you call runx:
      • Calculate cross product of all hyperparameters -> runs
      • For each run, create an output directory, copy your code there, and then launch the training command
  • logx
    • Logging of metrics, messages, checkpoints, tensorboard
  • sumx
    • Summarize the results of training runs, showing results and unique hyperparameters

These modules are intended to be used jointly, but if you just want to use runx, that's fine. However, using sumx requires that you've used logx to record metrics.

Create a project-specific configuration file

In order to use runx, you need to create a configuration file in the directory where you'll call the runx CLI.

The .runx file defines a number of critical fields:

  • LOGROOT - the root directory where you want your logs placed. This is a path that any farm job can write to.
  • FARM - if defined, jobs should be submitted to this farm, else run interactively
  • For a given farm, these fields are required:
    • SUBMIT_CMD - the farm submission command
    • RESOURCES - arguments passed to the SUBMIT_CMD. You can list any number of these items; the ones shown below are just examples.
  • CODE_IGNORE_PATTERNS - ignore these file patterns when copying code to the output directory

Here's an example of such a file:

LOGROOT: /home/logs
CODE_IGNORE_PATTERNS: '.git,*.pyc,docs*,test*'
FARM: bigfarm

# Farm resource needs
bigfarm:
    SUBMIT_CMD: 'submit_job'
    RESOURCES:
        image: mydocker-image-big:1.0
        gpu: 8
        cpu: 64
        mem: 450

smallfarm:
    SUBMIT_CMD: 'submit_small'
    RESOURCES:
        image: mydocker-image-small:1.2
        gpu: 4
        cpu: 32
        mem: 256

Run directory, logfiles

runx has two levels of experiment hierarchy: experiments and runs. An experiment corresponds to a single yaml file, which may contain many runs.

runx creates both a parent experiment directory and a unique subdirectory for each run. The name of the experiment directory is LOGROOT/<experiment name>, so in the example of sweep.yml, the experiment name is sweep, derived from the yaml filename.

For example, this might be the directory structure for the sweep study:

/home/logs
  sweep/
     curious-rattlesnake_2020.02.06_14.19/
     ambitious-lobster_2020.02.06_14.19/
     ...

The individual run directories are named with a combination of coolname and date. The use of coolname makes it much easier to refer to a given run than referring to a date code.

If you include the RUNX.TAG field in your experiment yaml or if you supply the --tag argument to the runx CLI, the names will include that tag.
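
For illustration, a name in that style could be generated roughly as follows (a sketch assuming the coolname package; runx's exact code may differ):

# A sketch of coolname + timestamp run-directory naming.
import os
from datetime import datetime
from coolname import generate_slug

logroot = '/home/logs'
experiment = 'sweep'
run_name = f'{generate_slug(2)}_{datetime.now().strftime("%Y.%m.%d_%H.%M")}'
run_dir = os.path.join(logroot, experiment, run_name)
os.makedirs(run_dir, exist_ok=True)
print(run_dir)  # e.g. /home/logs/sweep/curious-rattlesnake_2020.02.06_14.19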

Staging of code

runx actually makes a copy of your code within each run's log directory. This is done for a number of reasons:

  • If you wish to continue modifying your code while a training run is going on, you may do so without worrying about whether it will affect the running job(s)
  • In case your job dies and you must restart it, the code and training environment are self-contained within the logdir of a run.
  • This is also useful for documentation purposes: in case you ever want to know the exact state of the code for a given run.
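
A minimal sketch of this kind of staging, reusing the CODE_IGNORE_PATTERNS idea from the .runx file (an illustration, not runx's exact implementation; the run directory below is hypothetical):

# A sketch of copying the working directory into a run's logdir, honoring ignore patterns.
import shutil

ignore_patterns = '.git,*.pyc,docs*,test*'.split(',')
run_dir = '/home/logs/sweep/curious-rattlesnake_2020.02.06_14.19'  # hypothetical run directory

shutil.copytree(
    '.',                                             # the current project directory
    f'{run_dir}/code',                               # staged copy used by the launched job
    ignore=shutil.ignore_patterns(*ignore_patterns),
)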

Experiment yaml details

Special variables

  • CMD - Your base training command. You typically don't include any args here.
  • HPARAMS - All hyperparameters. This is a data structure that may either be a simple dict of params or a list of dicts. Furthermore, each hyperparameter value may be a scalar, a list, or a boolean.
  • PYTHONPATH - This field is optional. It overrides the default PYTHONPATH, which is simply LOGDIR/code. Can be a colon-separated list of paths and may include the LOGDIR special variable.

HPARAMS

A simple example of HPARAMS is:

CMD: "python train.py"

HPARAMS:
  logdir: LOGDIR
  adam: true
  arch: alexnet
  lr: [0.01, 0.02]
  epochs: 10
  RUNX.TAG: 'alexnet'

Here, 2 runs will be created, one for each value of lr.

Booleans

If you want to specify that a boolean flag should be on or off, this is done using true and false keywords:

some_flag: [true, false]

This would result in one run with --some_flag and another run without that flag.

If instead you want to pass an actual string, you could do the following:

  some_arg: ['True', 'False']

This would result in one run with --some_arg True and another run with --some_arg False.

If you'd like an argument not to be passed to your script at all, you can set it to None:

  some_arg: None
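
To make the mapping concrete, here's a minimal sketch of how these values translate into command-line arguments (an illustration of the behavior described above, not runx's exact code):

# A sketch of how boolean / string / None values map onto command-line args.
def format_arg(key, value):
    if value is True:
        return f'--{key}'         # bare flag, e.g. --some_flag
    if value is False or value is None:
        return ''                 # argument omitted entirely
    return f'--{key} {value}'     # ordinary key/value pair, e.g. --some_arg True

print(format_arg('some_flag', True))   # --some_flag
print(format_arg('some_flag', False))  # (nothing)
print(format_arg('some_arg', 'True'))  # --some_arg True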

Lists, Inheritance

Oftentimes, you might want to define separate lists of hyperparameters in your experiment. For example:

  1. arch = alexnet with lr=[0.01, 0.02]
  2. arch = resnet50 with lr=[0.002, 0.005]

You can do this with hparams defined as follows:

PYTHONPATH: LOGDIR/code:LOGDIR/code/lib
CMD: "python train.py"

HPARAMS: [
  {
   logdir: LOGDIR,
   adam: true,
   arch: alexnet,
   lr: [0.01, 0.02],
   epochs: 10,
   RUNX.TAG: 'alexnet',
  },
  {
   arch: resnet50,
   lr: [0.002, 0.005],
   RUNX.TAG: 'resnet50',
  },
  {
   RUNX.SKIP: true,
   arch: resnet50,
   lr: [0.002, 0.005],
   RUNX.TAG: 'resnet50',
  }
]

You might observe that HPARAMS is now a list of dicts. The nice thing is that runx assumes inheritance from the first item in the list to all remaining dicts, so you don't have to re-type all the redundant hyperparameters. (The third dict sets RUNX.SKIP: true, which tells runx to skip it, so it launches no runs.)

When you pass this yaml to runx, you'll get the following out:

submit_job ... --name alexnet_2020.02.06_6.32  -c "python train.py --logdir ... --lr 0.01 --adam --arch alexnet --epochs 10"
submit_job ... --name alexnet_2020.02.06_6.40  -c "python train.py --logdir ... --lr 0.02 --adam --arch alexnet --epochs 10"
submit_job ... --name resnet50_2020.02.06_6.45 -c "python train.py --logdir ... --lr 0.002 --adam --arch resnet50 --epochs 10"
submit_job ... --name resnet50_2020.02.06_6.50 -c "python train.py --logdir ... --lr 0.005 --adam --arch resnet50 --epochs 10"

Because of inheritance, adam, arch, and epochs params are set identically in each run.
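
Conceptually, the inheritance is just a dict merge from the first item onto each later item, something like this sketch (not runx's exact code):

# A sketch of HPARAMS inheritance: later dicts inherit keys from the first dict.
hparam_dicts = [
    {'adam': True, 'arch': 'alexnet', 'lr': [0.01, 0.02], 'epochs': 10},
    {'arch': 'resnet50', 'lr': [0.002, 0.005]},
]

base = hparam_dicts[0]
expanded = [base] + [{**base, **child} for child in hparam_dicts[1:]]
for d in expanded:
    print(d)
# the second dict inherits adam=True and epochs=10, while arch and lr are overridden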

This is also showing the use of the magic variable RUNX.TAG, which allows you to add a tag to a subset of your experiment. This is the same as if you'd used the --tag option to runx.py, except that here you can specify the tag within the hparams data structure. The value of RUNX.TAG is not passed to your training script.

A very useful feature of RUNX.TAG is that you can reference other hyperparameters, for example:

   arch: resnet50,
   RUNX.TAG: '{arch}-lrstudy'

This results in the tag becoming resnet50-lrstudy. runx performs simple string matching and substitution when it finds curly braces.
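
In other words, the tag behaves much like ordinary string formatting over the hyperparameter values, as in this small sketch:

# A sketch of substituting another hyperparameter's value into RUNX.TAG.
hparams = {'arch': 'resnet50', 'lr': 0.002}
tag_template = '{arch}-lrstudy'
print(tag_template.format(**hparams))  # resnet50-lrstudy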

logx - logging, tensorboarding, checkpointing

In order to use sumx, you need to export metrics with logx. logx helps to write metrics in a canonical way, so that sumx can summarize the results.

logx also makes it easy to output log information to a file (and to stdout). It can also manage saving of checkpoints automatically, keeping around only the latest and best checkpoints and thus saving much disk space.

The basic way you use logx is to modify your training code in the following ways:

At the top of your training script (or any module that calls logx functions):

from runx.logx import logx

Before using logx, you must initialize it as follows:

   logx.initialize(logdir=args.logdir, coolname=True, tensorboard=True)

Make sure that you're only calling logx from rank=0, in the event that you're using distributed data parallel.

Then, substitute the following logx calls into your code:

From                  To                  What
print()               logx.msg()          stdout messages
writer.add_scalar()   logx.add_scalar()   tensorboard scalar writes
writer.add_image()    logx.add_image()    tensorboard image writes
                      logx.save_model()   save latest/best models
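
For example, a toy loop after those substitutions might look like this (the logdir and metric values here are dummies):

# A sketch of a training loop using logx in place of print() and a SummaryWriter.
from runx.logx import logx

logx.initialize(logdir='/home/logs/sweep/example-run', coolname=True, tensorboard=True)

for epoch in range(3):
    val_loss = 0.5 / (epoch + 1)   # dummy metric values
    accuracy = 90.0 + epoch
    logx.msg(f'epoch {epoch}: val_loss={val_loss:.4f} acc={accuracy:.1f}')  # was print()
    logx.add_scalar('val/loss', val_loss, epoch)                            # was writer.add_scalar()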

Finally, in order for sumx to be able to read the results of your run, you have to push your metrics to logx. You should definitely push the 'val' metrics, but can push 'train' metrics if you like (sumx doesn't consume them at the moment).

# define which metrics to record
metrics = {'loss': test_loss, 'accuracy': accuracy}
# push the metrics to logfile
logx.metric(phase='val', metrics=metrics, epoch=epoch)

Some important points of logx.metric():

  • The phase argument describes whether the metric is a train or validation metric.
  • The epoch argument should be the epoch count for validation metrics; for training metrics, it is typically the iteration count (see the sketch below).
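
For instance (the metric values here are dummies):

# train metrics: 'epoch' is typically the iteration count
logx.metric(phase='train', metrics={'loss': 0.42}, epoch=1500)
# val metrics: 'epoch' is the epoch count
logx.metric(phase='val', metrics={'loss': 0.37, 'accuracy': 98.1}, epoch=10)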

Here's a final feature of logx: saving of the model. This feature helps save not only the latest but also the best model.

save_dict = {'epoch': epoch + 1,
             'arch': args.arch,
             'state_dict': model.state_dict(),
             'best_acc1': best_acc1,
             'optimizer' : optimizer.state_dict()}
logx.save_model(save_dict, metric=accuracy, epoch=epoch, higher_better=True)

You do have to tell save_model whether the metric is better when it's higher or lower.

sumx - summarizing your runs

sumx summarizes the results of your runs. It requires that you've logged your metrics with logx.metric(). We chose this behavior instead of reading Tensorboard files directly because that would be much slower.

> python -m runx.sumx sweep
        lr    solver  acc   epoch  epoch_time
run4    0.02  adam    99.1  10     5:21
run3    0.02  sgd     99.0  10     5:02
run1    0.01  sgd     98.2  10     5:40
run2    0.01  adam    98.1  10     5:25

A few features worth knowing about:

  • use --sortwith to sort the output by a particular field (like accuracy) that you care about most
  • sumx tells you what epoch your run is currently on
  • sumx tells you the average epoch time, which is handy if you are monitoring training speed
  • use the optional --ignore flag to limit what fields sumx prints out

NGC Support

NGC support is now standard. Your .runx file should look like the following:

LOGROOT: /path/to/logroot

FARM: ngc

ngc:
    NGC_LOGROOT: /path/to/ngc_logroot
    WORKSPACE: <your ngc workspace>
    SUBMIT_CMD: 'ngc batch run'
    RESOURCES:
       image: nvidian/pytorch:19.10-py3
       instance: dgx1v.16g.1.norm
       ace: nv-us-west-2
       result: /result

Necessary steps:

  • Fill out a path to LOGROOT, which is a client-side staging directory for the log directory
  • Create a RW NGC workspace and fill in WORKSPACE with it
  • Mount this workspace on your local machine and fill in NGC_LOGROOT with this path. When the job is launched, this is also the path used to mount the workspace on the running instance.
  • Fill out any necessary fields under RESOURCES. Recall that these parameters are passed on to the SUBMIT_CMD, which must be ngc batch run.

You should be able to launch jobs to NGC using this configuration. When jobs write their results, you should also be able to see the results in the mounted workspace, and then you should be able to run runx.sumx in order to summarize the results of those runs.
